Nov 28 16:58:16 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 16:58:16 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 16:58:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 
16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:17 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 28 16:58:18 crc kubenswrapper[5024]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 16:58:18 crc kubenswrapper[5024]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 16:58:18 crc kubenswrapper[5024]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 16:58:18 crc kubenswrapper[5024]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
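The restorecon pass ends with the "Relabeled /var/usrlocal/bin/kubenswrapper" line above: every other path was skipped with "not reset as customized by admin", which is the message selinux_restorecon emits when it is run without -F and a file's current type (here container_file_t, with per-pod MCS categories) counts as an admin customization it must not override. The kubelet's remaining flag-deprecation notices continue below. A minimal sketch for summarizing such a pass, assuming the journal has been saved as text with one entry per line (the script itself, and its name, are hypothetical, not part of this log):

#!/usr/bin/env python3
"""restorecon_summary.py: tally skipped vs. relabeled paths in a restorecon run."""
import re
import sys
from collections import Counter

# "... restorecon[PID]: PATH not reset as customized by admin to CONTEXT"
SKIPPED = re.compile(
    r"restorecon\[\d+\]: (?P<path>\S+) not reset as customized by admin to "
    r"(?P<context>\S+)")
RELABELED = re.compile(r"restorecon\[\d+\]: Relabeled (?P<path>\S+) from")
POD = re.compile(r"/var/lib/kubelet/pods/(?P<pod>[^/]+)/")

skipped_by_context = Counter()
skipped_by_pod = Counter()
relabeled = []
for line in sys.stdin:
    m = SKIPPED.search(line)
    if m:
        skipped_by_context[m.group("context")] += 1
        pod = POD.search(m.group("path"))
        skipped_by_pod[pod.group("pod") if pod else "(non-pod path)"] += 1
        continue
    m = RELABELED.search(line)
    if m:
        relabeled.append(m.group("path"))

for ctx, n in skipped_by_context.most_common():
    print(f"skipped {n:5d} paths left at {ctx}")
for pod, n in skipped_by_pod.most_common(5):
    print(f"        {n:5d} of them under pod {pod}")
print(f"actually relabeled: {relabeled}")

Fed something like journalctl -b -t restorecon (assuming the restorecon syslog identifier used here), this condenses the thousands of per-file lines into one count per SELinux context and per pod UID.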
Nov 28 16:58:18 crc kubenswrapper[5024]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 28 16:58:18 crc kubenswrapper[5024]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.356727 5024 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361120 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361172 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361182 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361190 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361198 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361273 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361282 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361288 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361293 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361299 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361304 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361309 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361315 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361320 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361325 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361332 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361337 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361347 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361352 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361358 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361365 5024 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361371 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361377 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361382 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361388 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361405 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361410 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361416 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361421 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361430 5024 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361436 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361441 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361447 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361452 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361458 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361463 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361469 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361474 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361482 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361491 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361497 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361505 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361511 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361520 5024 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361527 5024 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361536 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361542 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361549 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361555 5024 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361561 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361567 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361573 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361578 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361587 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361592 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361598 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361604 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361609 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361614 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361620 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361625 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361630 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361635 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361644 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361655 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
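
The burst of "unrecognized feature gate" warnings above comes from OpenShift-specific gate names being handed to the upstream kubelet feature-gate parser, which only knows upstream Kubernetes gates; the warnings are noisy but harmless, and the same list is re-printed each time the gate set is re-applied. To deduplicate and count them from a dump like this one, a minimal sketch (the file name kubelet.log is an assumption):

    import re
    from collections import Counter

    # Tally how often the kubelet warned about each unrecognized gate.
    GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as f:  # assumed save of this journal output
        for line in f:
            counts.update(GATE_RE.findall(line))

    for gate, n in counts.most_common():
        print(f"{n:3d}  {gate}")
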
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361663 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361669 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361675 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361680 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361686 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.361693 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.361851 5024 flags.go:64] FLAG: --address="0.0.0.0"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.361864 5024 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.361877 5024 flags.go:64] FLAG: --anonymous-auth="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.361887 5024 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.361926 5024 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362190 5024 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362202 5024 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362210 5024 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362217 5024 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362223 5024 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362230 5024 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362237 5024 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362243 5024 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362249 5024 flags.go:64] FLAG: --cgroup-root=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362255 5024 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362261 5024 flags.go:64] FLAG: --client-ca-file=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362267 5024 flags.go:64] FLAG: --cloud-config=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362273 5024 flags.go:64] FLAG: --cloud-provider=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362280 5024 flags.go:64] FLAG: --cluster-dns="[]"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362290 5024 flags.go:64] FLAG: --cluster-domain=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362297 5024 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362304 5024 flags.go:64] FLAG: --config-dir=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362310 5024 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362316 5024 flags.go:64] FLAG: --container-log-max-files="5"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362324 5024 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362330 5024 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362336 5024 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362343 5024 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362351 5024 flags.go:64] FLAG: --contention-profiling="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362359 5024 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362366 5024 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362372 5024 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362379 5024 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362387 5024 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362393 5024 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362399 5024 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362405 5024 flags.go:64] FLAG: --enable-load-reader="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362412 5024 flags.go:64] FLAG: --enable-server="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362418 5024 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362427 5024 flags.go:64] FLAG: --event-burst="100"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362433 5024 flags.go:64] FLAG: --event-qps="50"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362439 5024 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362445 5024 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362452 5024 flags.go:64] FLAG: --eviction-hard=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362459 5024 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362465 5024 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362471 5024 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362478 5024 flags.go:64] FLAG: --eviction-soft=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362484 5024 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362490 5024 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362497 5024 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362504 5024 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362510 5024 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362516 5024 flags.go:64] FLAG: --fail-swap-on="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362522 5024 flags.go:64] FLAG: --feature-gates=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362530 5024 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362537 5024 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362543 5024 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362549 5024 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362555 5024 flags.go:64] FLAG: --healthz-port="10248"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362562 5024 flags.go:64] FLAG: --help="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362568 5024 flags.go:64] FLAG: --hostname-override=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362574 5024 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362581 5024 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362587 5024 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362593 5024 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362599 5024 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362605 5024 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362612 5024 flags.go:64] FLAG: --image-service-endpoint=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362618 5024 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362624 5024 flags.go:64] FLAG: --kube-api-burst="100"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362630 5024 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362637 5024 flags.go:64] FLAG: --kube-api-qps="50"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362643 5024 flags.go:64] FLAG: --kube-reserved=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362648 5024 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362654 5024 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362660 5024 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362666 5024 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362672 5024 flags.go:64] FLAG: --lock-file=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362678 5024 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362684 5024 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362690 5024 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362699 5024 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362706 5024 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362712 5024 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362718 5024 flags.go:64] FLAG: --logging-format="text"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362724 5024 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362731 5024 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362737 5024 flags.go:64] FLAG: --manifest-url=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362743 5024 flags.go:64] FLAG: --manifest-url-header=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362752 5024 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362758 5024 flags.go:64] FLAG: --max-open-files="1000000"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362767 5024 flags.go:64] FLAG: --max-pods="110"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362774 5024 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362781 5024 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362788 5024 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362794 5024 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362800 5024 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362806 5024 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362812 5024 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362831 5024 flags.go:64] FLAG: --node-status-max-images="50"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362837 5024 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362843 5024 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362849 5024 flags.go:64] FLAG: --pod-cidr=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362855 5024 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362865 5024 flags.go:64] FLAG: --pod-manifest-path=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362871 5024 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362878 5024 flags.go:64] FLAG: --pods-per-core="0"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362883 5024 flags.go:64] FLAG: --port="10250"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362890 5024 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362896 5024 flags.go:64] FLAG: --provider-id=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362901 5024 flags.go:64] FLAG: --qos-reserved=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362907 5024 flags.go:64] FLAG: --read-only-port="10255"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362914 5024 flags.go:64] FLAG: --register-node="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362920 5024 flags.go:64] FLAG: --register-schedulable="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362926 5024 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362936 5024 flags.go:64] FLAG: --registry-burst="10"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362942 5024 flags.go:64] FLAG: --registry-qps="5"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362948 5024 flags.go:64] FLAG: --reserved-cpus=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362955 5024 flags.go:64] FLAG: --reserved-memory=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362970 5024 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362977 5024 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362985 5024 flags.go:64] FLAG: --rotate-certificates="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362993 5024 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.362999 5024 flags.go:64] FLAG: --runonce="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363005 5024 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363013 5024 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363046 5024 flags.go:64] FLAG: --seccomp-default="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363053 5024 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363060 5024 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363066 5024 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363073 5024 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363079 5024 flags.go:64] FLAG: --storage-driver-password="root"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363086 5024 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363092 5024 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363098 5024 flags.go:64] FLAG: --storage-driver-user="root"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363104 5024 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363111 5024 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363117 5024 flags.go:64] FLAG: --system-cgroups=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363123 5024 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363133 5024 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363139 5024 flags.go:64] FLAG: --tls-cert-file=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363145 5024 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363153 5024 flags.go:64] FLAG: --tls-min-version=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363159 5024 flags.go:64] FLAG: --tls-private-key-file=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363166 5024 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363172 5024 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363179 5024 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363185 5024 flags.go:64] FLAG: --v="2"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363200 5024 flags.go:64] FLAG: --version="false"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363209 5024 flags.go:64] FLAG: --vmodule=""
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363216 5024 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363223 5024 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363383 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363390 5024 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363397 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363403 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363409 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363415 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363420 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363426 5024 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363432 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363437 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363443 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363449 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363454 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363459 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363464 5024 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363469 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363474 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363480 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363485 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363490 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363497 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363504 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363509 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363516 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363521 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363530 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363535 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363540 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363547 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363552 5024 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363557 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363563 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363568 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363574 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363580 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363587 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363593 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363599 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363606 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363613 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363618 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363625 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363630 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363636 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363642 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363647 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363653 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363679 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363686 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363691 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363698 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363705 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363711 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363716 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363721 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363727 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363732 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363740 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363745 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363751 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363756 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363761 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363766 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363771 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363776 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363781 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363787 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363792 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363799 5024 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363806 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.363811 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.363820 5024 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.371061 5024 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.371103 5024 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371199 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371209 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371217 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371227 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371233 5024 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371238 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371244 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371249 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371253 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371259 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371263 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371268 5024 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371272 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371276 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371281 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371285 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371289 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371294 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371298 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371302 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371307 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371312 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371316 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371321 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371325 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371330 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371335 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371340 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371344 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371350 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371355 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371361 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371365 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371369 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371374 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371378 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371383 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371387 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371391 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371396 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371401 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371407 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371413 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371418 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371423 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371427 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371432 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371439 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371444 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371449 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371455 5024 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371461 5024 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371465 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371470 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371475 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371480 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371485 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371490 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371495 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371500 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371505 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371510 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371514 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371520 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371525 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371532 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371537 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371543 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371549 5024 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371554 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371558 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.371567 5024 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371778 5024 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371790 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371796 5024 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371801 5024 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371807 5024 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371811 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371816 5024 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371821 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371826 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371831 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371836 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371841 5024 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371845 5024 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371850 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371857 5024 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371862 5024 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371867 5024 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371872 5024 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371877 5024 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371881 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371886 5024 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371891 5024 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371896 5024 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371902 5024 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371908 5024 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371913 5024 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371918 5024 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371922 5024 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371927 5024 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371932 5024 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371937 5024 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371942 5024 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371946 5024 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371950 5024 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371955 5024 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371960 5024 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371964 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371969 5024 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371973 5024 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371978 5024 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371983 5024 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371988 5024 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371995 5024 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.371999 5024 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372003 5024 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372008 5024 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372013 5024 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372033 5024 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372038 5024 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372043 5024 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372047 5024 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372052 5024 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372057 5024 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372062 5024 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372066 5024 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372071 5024 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372075 5024 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372080 5024 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372084 5024 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372089 5024 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372093 5024 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372097 5024 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372103 5024 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372108 5024 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372113 5024 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372119 5024 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372124 5024 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372129 5024 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372133 5024 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372138 5024 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.372142 5024 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.372151 5024 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.372589 5024 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.378048 5024 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.378181 5024 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
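
The flags.go:64] FLAG: lines above dump every command-line flag value after parsing, defaults included, and the feature_gate.go:386] lines print the effective gate map in Go's fmt notation. Both scrape cleanly into dictionaries, which makes it easy to diff two boots for configuration drift; a minimal sketch, under the same assumed kubelet.log dump:

    import re

    FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w.-]+)="(.*?)"')
    GATES_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

    flags, gates = {}, {}
    with open("kubelet.log", encoding="utf-8") as f:
        for line in f:
            for name, value in FLAG_RE.findall(line):
                flags[name] = value
            m = GATES_RE.search(line)
            if m:
                # "Gate1:true Gate2:false ..." -> {"Gate1": True, "Gate2": False, ...}
                gates = {k: v == "true"
                         for k, v in (pair.split(":") for pair in m.group(1).split())}

    print(flags.get("--node-ip"))   # 192.168.126.11
    print(gates.get("KMSv1"))       # True

Note that --feature-gates itself is empty in the dump, so the effective gates evidently come from the configuration file named by --config (/etc/kubernetes/kubelet.conf) rather than from the command line.
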
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.378814 5024 server.go:997] "Starting client certificate rotation" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.378844 5024 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.379002 5024 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-10 00:07:57.597348404 +0000 UTC Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.379144 5024 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1015h9m39.218207847s for next certificate rotation Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.383326 5024 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.385114 5024 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.395222 5024 log.go:25] "Validated CRI v1 runtime API" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.410643 5024 log.go:25] "Validated CRI v1 image API" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.412269 5024 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.415396 5024 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-16-54-08-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.415428 5024 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.429787 5024 manager.go:217] Machine: {Timestamp:2025-11-28 16:58:18.428553995 +0000 UTC m=+0.477474920 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fe25c19c-2a8b-43d8-b80c-708649046fac BootID:e109ddab-de02-41b4-a5ee-6ddddeff5610 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ca:31:cb Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ca:31:cb Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:de:c7:7d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:37:32:f1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fa:11:d3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:39:70:a9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:c4:c6:ba:53:cd Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:f2:1b:09:86:84 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.430030 5024 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.430155 5024 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.430420 5024 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.430573 5024 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.430609 5024 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.430804 5024 topology_manager.go:138] "Creating topology manager with none policy" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.430815 5024 container_manager_linux.go:303] "Creating device plugin manager" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.431004 5024 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.431045 5024 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 
16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.431324 5024 state_mem.go:36] "Initialized new in-memory state store" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.431482 5024 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.432245 5024 kubelet.go:418] "Attempting to sync node with API server" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.432263 5024 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.432282 5024 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.432304 5024 kubelet.go:324] "Adding apiserver pod source" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.432318 5024 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.435282 5024 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.435724 5024 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.435808 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.435805 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.435952 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.435988 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.441739 5024 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442878 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442905 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442915 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442925 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 28 16:58:18 crc kubenswrapper[5024]: 
I1128 16:58:18.442938 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442947 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442956 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442971 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442983 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.442992 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.443008 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.443032 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.443209 5024 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.443820 5024 server.go:1280] "Started kubelet" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.444068 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.444970 5024 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 28 16:58:18 crc systemd[1]: Started Kubernetes Kubelet. 
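[Annotation] The "connection refused" errors against https://api-int.crc.testing:6443 around "Started kubelet" are a normal cold-start sequence on a single-node cluster: the kubelet comes up first and only afterwards launches the kube-apiserver static pod, so its informers and the CSINode check fail and retry until the API server listens (the lease controller a few lines below even logs its retry interval of 200ms). A small illustrative probe in Go — the endpoint is taken from the log, and the backoff values are arbitrary, not client-go's:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        backoff := 200 * time.Millisecond
        for i := 0; i < 6; i++ {
            // Same TCP dial the kubelet's clients are failing above.
            conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable")
                return
            }
            fmt.Println("dial failed, retrying:", err)
            time.Sleep(backoff)
            backoff *= 2 // retry with growing delay until the static pod is up
        }
        fmt.Println("gave up")
    }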
Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.445198 5024 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.446984 5024 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.448512 5024 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.141:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c3a283885acab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 16:58:18.443779243 +0000 UTC m=+0.492700148,LastTimestamp:2025-11-28 16:58:18.443779243 +0000 UTC m=+0.492700148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.453053 5024 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.453193 5024 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.453654 5024 server.go:460] "Adding debug handlers to kubelet server" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.454200 5024 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.454487 5024 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.454500 5024 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.454567 5024 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.455945 5024 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:38:52.152655719 +0000 UTC Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.456053 5024 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 381h40m33.696607479s for next certificate rotation Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.456215 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.456378 5024 factory.go:55] Registering systemd factory Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.456499 5024 factory.go:221] Registration of the systemd container factory successfully Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.456180 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="200ms" Nov 28 
16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.456531 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.460295 5024 factory.go:153] Registering CRI-O factory Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.460366 5024 factory.go:221] Registration of the crio container factory successfully Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.460634 5024 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.460678 5024 factory.go:103] Registering Raw factory Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.460705 5024 manager.go:1196] Started watching for new ooms in manager Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.461617 5024 manager.go:319] Starting recovery of all containers Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466499 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466577 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466592 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466605 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466624 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466638 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466651 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 28 16:58:18 crc 
kubenswrapper[5024]: I1128 16:58:18.466664 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466680 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466697 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466711 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466728 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466741 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466758 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466774 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466787 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466803 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466819 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466833 
5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466847 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466860 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466874 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466885 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466897 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466917 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466931 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466945 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466958 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.466970 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467011 5024 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467041 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467076 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467094 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467107 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467122 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467135 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467147 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467163 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467176 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467188 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467201 5024 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467213 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467225 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467824 5024 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467851 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467865 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467877 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467891 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467906 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467921 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467934 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: 
I1128 16:58:18.467947 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467960 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467980 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.467994 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468005 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468035 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468050 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468064 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468077 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468095 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468112 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468125 5024 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468137 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468151 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468165 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468178 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468189 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468202 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468214 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468226 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468239 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468250 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468263 5024 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468280 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468294 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468308 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468321 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468341 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468354 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468369 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468383 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468396 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468410 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468429 5024 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468444 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468458 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468473 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468487 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468502 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468517 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468530 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468543 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468556 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468571 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468605 5024 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468620 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468634 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468667 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468681 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468697 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468712 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468725 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468738 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468752 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468772 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468788 5024 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468802 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468821 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468842 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468859 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468875 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468897 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468916 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468931 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468950 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468964 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468977 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.468991 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469005 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469062 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469077 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469090 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469104 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469118 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469132 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469147 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469162 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469178 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469192 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469206 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469224 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469245 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469259 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469273 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469511 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469534 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469547 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469559 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469572 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469585 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469600 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469612 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469625 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469641 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469653 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469665 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469678 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469724 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469741 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469755 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469769 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469785 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469802 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469815 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469828 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469845 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469857 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469869 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469882 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469898 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469910 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469924 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469936 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469952 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469966 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.469987 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470002 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470033 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470047 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470059 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470071 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470082 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470095 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470109 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470122 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470136 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470148 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470163 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470175 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470185 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470197 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470216 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470239 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470252 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470268 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470284 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470300 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470318 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470335 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470350 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470368 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470381 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470398 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470409 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470425 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470442 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470463 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470480 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470491 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470514 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470529 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470584 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470598 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470615 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470657 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470670 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470688 5024 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470752 5024 reconstruct.go:97] "Volume reconstruction finished" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.470761 5024 reconciler.go:26] "Reconciler: start to sync state" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.485820 5024 manager.go:324] Recovery completed Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.493448 5024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.496560 5024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.496629 5024 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.496667 5024 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.496800 5024 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.498498 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.498583 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.499813 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.501347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.501465 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.501572 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.502237 5024 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.502316 5024 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 16:58:18 crc 
kubenswrapper[5024]: I1128 16:58:18.502385 5024 state_mem.go:36] "Initialized new in-memory state store" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.513580 5024 policy_none.go:49] "None policy: Start" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.514976 5024 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.515119 5024 state_mem.go:35] "Initializing new in-memory state store" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.555453 5024 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.568753 5024 manager.go:334] "Starting Device Plugin manager" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.568811 5024 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.568828 5024 server.go:79] "Starting device plugin registration server" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.569320 5024 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.569338 5024 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.569477 5024 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.569618 5024 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.569630 5024 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.579860 5024 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.597520 5024 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.597631 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.598880 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.598919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.598932 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.599121 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.599466 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.599555 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.599952 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.599980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.599993 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600151 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600313 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600353 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600533 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600819 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600851 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.600862 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601038 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601130 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601168 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601302 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601787 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601807 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601837 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601848 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.601821 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.602039 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.602177 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.602248 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.602859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.602894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.602904 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603077 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603106 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603285 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.603704 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.658485 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="400ms" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.670458 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.671921 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.671974 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.671988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672058 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672472 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672551 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672582 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc 
kubenswrapper[5024]: I1128 16:58:18.672693 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672798 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.672797 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.141:6443: connect: connection refused" node="crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672831 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672872 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672915 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672949 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672980 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.672998 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.673047 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.673069 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.673086 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.673119 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774121 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774169 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774185 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774199 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774215 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc 
kubenswrapper[5024]: I1128 16:58:18.774268 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774290 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774311 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774326 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774375 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774393 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774396 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774433 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774453 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774473 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774525 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774558 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774870 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774889 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.775013 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774946 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774956 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774965 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774974 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774979 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774992 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774984 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.774937 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.873721 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.875235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.875271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.875279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.875301 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:18 crc kubenswrapper[5024]: E1128 16:58:18.875758 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.141:6443: connect: connection refused" node="crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.919569 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.924613 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.938607 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.946984 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: I1128 16:58:18.952872 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.955156 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6ad61788e54e975fcaf9f9bedb6c53663a387b4f7fe13b9c2cb83d2153e79ee2 WatchSource:0}: Error finding container 6ad61788e54e975fcaf9f9bedb6c53663a387b4f7fe13b9c2cb83d2153e79ee2: Status 404 returned error can't find the container with id 6ad61788e54e975fcaf9f9bedb6c53663a387b4f7fe13b9c2cb83d2153e79ee2 Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.955715 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a8e6dfca3500be69810fab2231037ca780324b7a31945e7b1260fefdec4d75c3 WatchSource:0}: Error finding container a8e6dfca3500be69810fab2231037ca780324b7a31945e7b1260fefdec4d75c3: Status 404 returned error can't find the container with id a8e6dfca3500be69810fab2231037ca780324b7a31945e7b1260fefdec4d75c3 Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.965361 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d65166b749175740d2ba58d93c48aa5143a98e838227e51abc13753f439a3bd0 WatchSource:0}: Error finding container d65166b749175740d2ba58d93c48aa5143a98e838227e51abc13753f439a3bd0: Status 404 returned error can't find the container with id d65166b749175740d2ba58d93c48aa5143a98e838227e51abc13753f439a3bd0 Nov 28 16:58:18 crc kubenswrapper[5024]: W1128 16:58:18.968995 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1c20cf0dd277465d4ff5dd0a806b07a1a9bdaaa7a009f6c3e8948d985d3e688f WatchSource:0}: Error finding container 1c20cf0dd277465d4ff5dd0a806b07a1a9bdaaa7a009f6c3e8948d985d3e688f: Status 404 returned error can't find the container with id 1c20cf0dd277465d4ff5dd0a806b07a1a9bdaaa7a009f6c3e8948d985d3e688f Nov 28 16:58:19 crc kubenswrapper[5024]: E1128 16:58:19.059996 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="800ms" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.276485 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.277955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.277998 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.278010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.278049 5024 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Nov 28 16:58:19 crc kubenswrapper[5024]: E1128 16:58:19.278502 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.141:6443: connect: connection refused" node="crc" Nov 28 16:58:19 crc kubenswrapper[5024]: W1128 16:58:19.334929 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:19 crc kubenswrapper[5024]: E1128 16:58:19.335126 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.445614 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:19 crc kubenswrapper[5024]: W1128 16:58:19.489126 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:19 crc kubenswrapper[5024]: E1128 16:58:19.489234 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.504236 5024 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018" exitCode=0 Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.504384 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.504567 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a8e6dfca3500be69810fab2231037ca780324b7a31945e7b1260fefdec4d75c3"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.504731 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.505910 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.505934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.505944 5024 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.506593 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.506637 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1c20cf0dd277465d4ff5dd0a806b07a1a9bdaaa7a009f6c3e8948d985d3e688f"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.508209 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75" exitCode=0 Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.508256 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.508298 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d65166b749175740d2ba58d93c48aa5143a98e838227e51abc13753f439a3bd0"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.508402 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.509124 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.509158 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.509170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.510706 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.511503 5024 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05" exitCode=0 Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.511561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.511590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.511604 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.511612 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.511670 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a380447f54d152aa849ec761807173c92bd3e184e596733fc69f1b0e0236205"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.511870 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.513281 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.513325 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.513335 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.513920 5024 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6191c20f22a7164a4f6" exitCode=0 Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.513957 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6191c20f22a7164a4f6"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.513980 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6ad61788e54e975fcaf9f9bedb6c53663a387b4f7fe13b9c2cb83d2153e79ee2"} Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.514088 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.514912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.514949 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:19 crc kubenswrapper[5024]: I1128 16:58:19.514960 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:19 crc kubenswrapper[5024]: W1128 16:58:19.575455 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:19 crc kubenswrapper[5024]: E1128 16:58:19.575848 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:19 crc kubenswrapper[5024]: W1128 16:58:19.613901 5024 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:19 crc kubenswrapper[5024]: E1128 16:58:19.614104 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:19 crc kubenswrapper[5024]: E1128 16:58:19.861048 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="1.6s" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.079686 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.081595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.081629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.081640 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.081662 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:20 crc kubenswrapper[5024]: E1128 16:58:20.082199 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.141:6443: connect: connection refused" node="crc" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.445306 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.519784 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.519852 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.519867 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.520001 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.520998 
5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.521063 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.521078 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.523662 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.523744 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.523766 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.523692 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.525339 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.525380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.525395 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.527739 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.527973 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.527987 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.530502 5024 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016" exitCode=0 Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.530572 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.530698 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.531409 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.531447 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.531459 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.532543 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"155da67f291f7b2b01e88f859d0c5e8dad924363c72e0cbba9dbaec899a6f756"} Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.532619 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.535603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.535719 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:20 crc kubenswrapper[5024]: I1128 16:58:20.535794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:21 crc kubenswrapper[5024]: W1128 16:58:21.328513 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:21 crc kubenswrapper[5024]: E1128 16:58:21.328631 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.141:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.337081 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.445285 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.141:6443: connect: connection refused Nov 28 16:58:21 crc kubenswrapper[5024]: E1128 16:58:21.462095 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="3.2s" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.537599 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96"} Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.537651 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52"} Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.537696 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.538836 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.538871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.538881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.540872 5024 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe" exitCode=0 Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.540936 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe"} Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541003 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541058 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541969 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.541975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.682754 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.684184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.684265 5024 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.684277 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:21 crc kubenswrapper[5024]: I1128 16:58:21.684302 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.441069 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.446089 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.548140 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f805b89004d6feac3504587239ede0386e63f5776fbecaf2ae4e397a2e9b7b4f"} Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.548205 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2752c5873cb62269bfe3ede5bf8d88d306ced5c6e198a0b96c3f8d3748c0f1f"} Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.548240 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"66028a7f2194d675fd52778ac8ffa00b749e3e2272df93fa1ae4500705d2a409"} Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.548274 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.548285 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.548347 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.549360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.549391 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.549406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.550067 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.550113 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:22 crc kubenswrapper[5024]: I1128 16:58:22.550125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.555553 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.555617 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:23 crc 
kubenswrapper[5024]: I1128 16:58:23.556349 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.556489 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b126f470b7087ee944c80851edeee88ae97a89b1fa710a522d6ff2cb4710f983"} Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.556600 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"57347508de49dbce7e1fb1f625993ba3c9676820588c2cbe4ebbc54d0e7a46db"} Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.556640 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.556714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.556752 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.556764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.557172 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.557219 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.557234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.557801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.557857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:23 crc kubenswrapper[5024]: I1128 16:58:23.557871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.159732 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.557717 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.557726 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.559067 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.559085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.559176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.559194 5024 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.559131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:24 crc kubenswrapper[5024]: I1128 16:58:24.559231 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.955791 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.955958 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.957190 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.957253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.957271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.995316 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.995529 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.996935 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.996984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:25 crc kubenswrapper[5024]: I1128 16:58:25.997002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:26 crc kubenswrapper[5024]: I1128 16:58:26.087818 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 16:58:26 crc kubenswrapper[5024]: I1128 16:58:26.088056 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:26 crc kubenswrapper[5024]: I1128 16:58:26.089387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:26 crc kubenswrapper[5024]: I1128 16:58:26.089442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:26 crc kubenswrapper[5024]: I1128 16:58:26.089452 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:28 crc kubenswrapper[5024]: I1128 16:58:28.183309 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:28 crc kubenswrapper[5024]: I1128 16:58:28.183509 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:28 crc kubenswrapper[5024]: I1128 16:58:28.185093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:58:28 crc kubenswrapper[5024]: I1128 16:58:28.185123 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:28 crc kubenswrapper[5024]: I1128 16:58:28.185131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:28 crc kubenswrapper[5024]: E1128 16:58:28.580049 5024 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.211406 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.211624 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.213135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.213175 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.213184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.215801 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.570416 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.571413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.571471 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:29 crc kubenswrapper[5024]: I1128 16:58:29.571481 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:30 crc kubenswrapper[5024]: I1128 16:58:30.410065 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 16:58:30 crc kubenswrapper[5024]: I1128 16:58:30.410152 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 16:58:31 crc kubenswrapper[5024]: E1128 16:58:31.685686 5024 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 28 16:58:32 crc kubenswrapper[5024]: W1128 16:58:32.014525 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.014667 5024 trace.go:236] Trace[1886612805]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:22.009) (total time: 10004ms): Nov 28 16:58:32 crc kubenswrapper[5024]: Trace[1886612805]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (16:58:32.014) Nov 28 16:58:32 crc kubenswrapper[5024]: Trace[1886612805]: [10.004654933s] [10.004654933s] END Nov 28 16:58:32 crc kubenswrapper[5024]: E1128 16:58:32.014702 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 16:58:32 crc kubenswrapper[5024]: W1128 16:58:32.112320 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.112458 5024 trace.go:236] Trace[988991951]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:22.110) (total time: 10001ms): Nov 28 16:58:32 crc kubenswrapper[5024]: Trace[988991951]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:58:32.112) Nov 28 16:58:32 crc kubenswrapper[5024]: Trace[988991951]: [10.001433831s] [10.001433831s] END Nov 28 16:58:32 crc kubenswrapper[5024]: E1128 16:58:32.112489 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 16:58:32 crc kubenswrapper[5024]: W1128 16:58:32.115984 5024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.116132 5024 trace.go:236] Trace[1198883187]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:22.114) (total time: 10001ms): Nov 28 16:58:32 crc kubenswrapper[5024]: Trace[1198883187]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:58:32.115) Nov 28 16:58:32 crc kubenswrapper[5024]: Trace[1198883187]: [10.001454121s] [10.001454121s] END Nov 28 16:58:32 crc kubenswrapper[5024]: E1128 16:58:32.116160 5024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS 
handshake timeout" logger="UnhandledError" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.212516 5024 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.212635 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.222927 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.223209 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.224467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.224504 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.224514 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.269192 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.445244 5024 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.577889 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.579266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.579299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.579309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[5024]: I1128 16:58:32.591761 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.582344 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.583684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.583755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.583765 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.598255 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.598352 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.602566 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 16:58:33 crc kubenswrapper[5024]: I1128 16:58:33.602654 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 16:58:34 crc kubenswrapper[5024]: I1128 16:58:34.886376 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:34 crc kubenswrapper[5024]: I1128 16:58:34.888200 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:34 crc kubenswrapper[5024]: I1128 16:58:34.888250 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:34 crc kubenswrapper[5024]: I1128 16:58:34.888261 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:34 crc kubenswrapper[5024]: I1128 16:58:34.888292 5024 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:35 crc kubenswrapper[5024]: I1128 16:58:35.669551 5024 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.002596 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.003510 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.004964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.005073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.005095 5024 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.008336 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.067600 5024 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.621982 5024 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.622706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.622731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:36 crc kubenswrapper[5024]: I1128 16:58:36.622740 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.261736 5024 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:38 crc kubenswrapper[5024]: E1128 16:58:38.580205 5024 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 16:58:38 crc kubenswrapper[5024]: E1128 16:58:38.595238 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.601388 5024 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.603357 5024 trace.go:236] Trace[1240637849]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:25.507) (total time: 13096ms): Nov 28 16:58:38 crc kubenswrapper[5024]: Trace[1240637849]: ---"Objects listed" error: 13096ms (16:58:38.603) Nov 28 16:58:38 crc kubenswrapper[5024]: Trace[1240637849]: [13.096084534s] [13.096084534s] END Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.603392 5024 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.658326 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.766984 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36806->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.767111 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36806->192.168.126.11:17697: read: connection reset by peer" Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.771850 5024 
Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.771850 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36824->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Nov 28 16:58:38 crc kubenswrapper[5024]: I1128 16:58:38.771946 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36824->192.168.126.11:17697: read: connection reset by peer"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.217277 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.221071 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.456247 5024 apiserver.go:52] "Watching apiserver"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.465093 5024 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.465696 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.466142 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.466304 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.466428 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.466749 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.466874 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
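[Annotation] The "Error syncing pod, skipping ... NetworkPluginNotReady" entries around this point are expected during early startup: the container runtime reports the node network as not ready because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so the kubelet defers sandbox creation for the affected pods until the network provider has written its config there (the ovnkube volumes elsewhere in this log suggest OVN-Kubernetes). A sketch of the check the message implies; the directory is taken from the log, and the accepted extensions are an assumption:

    // cni_ready.go: approximates the "is there a CNI config yet?" test behind
    // the NetworkPluginNotReady message above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read", dir+":", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file yet; pods stay in NetworkPluginNotReady")
        }
    }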
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.466931 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.466930 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.467105 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.467140 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.511224 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.511224 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.511497 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.511637 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.520182 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.523858 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.523978 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.524152 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.524291 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.556311 5024 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.605527 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.605886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606049 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606161 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606268 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606540 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606658 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606761 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606857 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607007 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607125 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607234 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607333 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607438 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.605994 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606416 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606511 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606743 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.606881 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607135 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607310 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607679 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607323 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607488 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607531 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607545 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607824 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607851 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.607893 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.608946 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.608956 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609012 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609058 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609103 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609201 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609236 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609260 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609282 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609300 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609328 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609355 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609376 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609402 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609427 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609448 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609478 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610725 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609308 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610775 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.609389 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610165 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610637 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610664 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610736 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610801 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610829 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610880 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.610933 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611005 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611079 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611163 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611191 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611210 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611257 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611262 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611314 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611320 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611411 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611442 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611467 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611682 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611711 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611869 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612052 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611420 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611462 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612202 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611622 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611927 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612231 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612253 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612280 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612302 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612341 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612495 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612625 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612754 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612859 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612890 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612914 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612933 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612959 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613108 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613221 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613325 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613432 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613463 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613579 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613608 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613631 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613654 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613693 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613776 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613797 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613821 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613896 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613950 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613974 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614318 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614340 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614364 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614386 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614405 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614423 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614441 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614464 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614482 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614504 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614597 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614619 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614699 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614719 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614795 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614817 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614841 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.614862 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615184 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615216 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615234 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615254 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615276 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615332 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615352 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615374 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615501 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.615529 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.611971 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612257 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612351 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612598 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.612672 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613137 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613216 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613182 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.613641 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.615623 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:40.11557821 +0000 UTC m=+22.164499115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.616218 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.616412 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.621585 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.621620 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.621930 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.622045 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.622273 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.622516 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.624166 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.624681 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.629220 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.629460 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.630107 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.633285 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.636458 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.647557 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.649987 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.650230 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.653478 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96" exitCode=255 Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.664983 5024 scope.go:117] "RemoveContainer" containerID="6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.665487 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.665738 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.665979 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.666334 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.666574 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.666758 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.667760 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.667880 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.667975 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.669230 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.669448 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.669731 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.670010 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.670278 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.670685 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.670875 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.671620 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.671771 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.671917 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.672449 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.673248 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.673553 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.673785 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.674405 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.674780 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.674814 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.675129 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.675167 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.675470 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.675632 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.675840 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.676105 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.676402 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.676522 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.676914 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.676964 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677033 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677549 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677680 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677694 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677739 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677769 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677802 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677832 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677843 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677865 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677889 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677910 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677928 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677957 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.677992 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678046 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678073 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678102 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678109 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678131 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678171 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678202 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678225 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678252 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678284 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678305 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678327 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678346 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678365 
5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678383 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678401 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678419 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678437 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678464 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678480 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678498 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678519 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678536 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 
16:58:39.678569 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678602 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678630 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678658 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678685 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678710 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678738 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678766 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678795 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678822 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678853 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678876 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678903 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678930 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678978 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679004 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679044 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679063 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679080 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679097 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679113 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679133 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679162 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679191 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679214 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679244 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679268 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679294 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679322 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679351 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679373 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679403 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679428 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679455 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679481 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679506 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679533 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679562 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679584 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679602 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679620 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679637 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679665 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679690 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679714 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679741 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679799 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679828 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679854 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679881 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679922 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679948 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679974 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679998 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678380 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678527 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678880 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.678900 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679148 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679365 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679705 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.679906 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.680107 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.680389 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.680439 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.680521 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.680714 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.680790 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.681039 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.681089 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.681188 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.681271 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.681515 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.681657 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.681815 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.682077 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.682010 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.682424 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.682721 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.683108 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.683133 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.683321 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.683457 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.683587 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.683612 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.683769 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.684089 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.684777 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96"} Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.685143 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.685386 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.685905 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.686292 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.687286 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.687542 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.687816 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.688087 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.688140 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.688529 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.690647 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.691033 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.691432 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.691469 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.692230 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.693776 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.694503 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.695369 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.695656 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.695827 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.695842 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.695924 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.696212 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.698917 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.699391 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.699487 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.715057 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.715392 5024 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.715127 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.715438 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.715869 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.716307 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.716299 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.716469 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.716800 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.716956 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.717109 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.717136 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.717248 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.718084 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.718228 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.718838 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.719927 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721492 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721582 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721627 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721678 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721730 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721773 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721820 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721868 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721916 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.721963 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.721987 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722003 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722060 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722093 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.722120 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:40.222089696 +0000 UTC m=+22.271010771 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722169 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722292 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722318 5024 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722345 5024 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722372 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722398 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722419 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722443 5024 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722465 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722490 5024 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722512 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722537 5024 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722561 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722575 5024 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722589 5024 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722603 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722617 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722630 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722649 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722669 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722693 5024 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722717 5024 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722737 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722765 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722792 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722812 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722837 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722864 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722884 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722905 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722925 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722944 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.722992 5024 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723036 5024 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723059 5024 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723078 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723102 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723122 5024 reconciler_common.go:293] "Volume 
detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723151 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723176 5024 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723197 5024 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723218 5024 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723241 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723263 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723288 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723317 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723349 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723377 5024 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.723381 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723397 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723415 5024 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723429 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723442 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723114 5024 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.723472 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:40.223446335 +0000 UTC m=+22.272367460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723500 5024 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723519 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723537 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723550 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723564 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723577 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723592 5024 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") 
on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723605 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723624 5024 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723646 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723669 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723689 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723715 5024 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723735 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723757 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723776 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723799 5024 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723822 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723834 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723847 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" 
DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723860 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723873 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723886 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723902 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723914 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723934 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723953 5024 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.723975 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724000 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724040 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724063 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724160 5024 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724179 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724220 5024 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724242 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724271 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724286 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724316 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724328 5024 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724337 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724352 5024 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724366 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724398 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724413 5024 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724426 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724441 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724461 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724476 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724492 5024 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724502 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724513 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724522 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724533 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724542 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724556 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724566 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724578 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724591 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724602 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 
16:58:39.724620 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724632 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724644 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724655 5024 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724667 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724678 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724688 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724697 5024 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724710 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724724 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724739 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724755 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724769 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724782 5024 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724823 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724834 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724848 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724861 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724875 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724889 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724900 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724914 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724927 5024 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724941 5024 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724954 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724968 5024 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724982 5024 reconciler_common.go:293] 
"Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.724992 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.725006 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.727760 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.727799 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.727815 5024 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.727832 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.727845 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.727982 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729068 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729140 5024 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729161 5024 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729175 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729192 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729205 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729219 5024 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729233 5024 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729249 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729270 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729285 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729309 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729324 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729334 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729344 5024 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729359 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729372 5024 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729381 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729393 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729403 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729413 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729427 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729442 5024 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729454 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729467 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729481 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.729496 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.735899 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.735899 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.736007 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.738658 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.741197 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.741852 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.743766 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.743900 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.746060 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.746276 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.746679 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.746891 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.749669 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.750221 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.750477 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.751337 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.751727 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.751760 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.751781 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.751872 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:40.25184032 +0000 UTC m=+22.300761225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.751994 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.752049 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.752056 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.752227 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.752076 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.752321 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.753724 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.754103 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:40.254062814 +0000 UTC m=+22.302983859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.754514 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.754969 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.756409 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.762541 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.770219 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.771273 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.772832 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.780930 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.785904 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.787934 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.799855 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.812517 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.817429 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.818332 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830264 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830362 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830434 5024 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830433 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830453 5024 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830517 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830543 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830565 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830584 5024 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830603 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830616 5024 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" 
DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830628 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830640 5024 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830651 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830662 5024 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830673 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830687 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830700 5024 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830711 5024 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830722 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830736 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830749 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830760 5024 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830770 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830782 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830792 5024 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.830804 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.833332 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.848957 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.860928 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.872387 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.896528 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.912411 5024 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.912502 5024 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.913678 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.914004 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.914051 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.914062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.914077 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.914088 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:39Z","lastTransitionTime":"2025-11-28T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.947336 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.947858 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.955125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.955185 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.955198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.955219 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.955233 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:39Z","lastTransitionTime":"2025-11-28T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:39 crc kubenswrapper[5024]: E1128 16:58:39.966746 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.968885 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.973235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.973284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.973293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.973319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.973331 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:39Z","lastTransitionTime":"2025-11-28T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:58:39 crc kubenswrapper[5024]: I1128 16:58:39.978577 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.020121 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"f
e25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.022408 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.024291 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.024312 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.024320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.024338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.024349 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.037751 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.044001 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"f
e25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.048527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.048569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.048580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.048595 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.048605 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.054781 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.064169 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.064329 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.069107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.069148 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.069158 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.069176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.069189 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.071584 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.107997 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:40 crc kubenswrapper[5024]: W1128 16:58:40.119890 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c8475814e821fd77e021d368f90b24060cb4d2b65c6a2835678f8b2e8bc459c7 WatchSource:0}: Error finding container c8475814e821fd77e021d368f90b24060cb4d2b65c6a2835678f8b2e8bc459c7: Status 404 returned error can't find the container with id c8475814e821fd77e021d368f90b24060cb4d2b65c6a2835678f8b2e8bc459c7 Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.125704 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.127335 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7lvcw"] Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.127693 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.130131 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.130640 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.131159 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.135444 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.135644 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:41.135622314 +0000 UTC m=+23.184543219 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.175131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.175220 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.175238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.175262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.175278 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.186333 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.211255 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.239870 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.239918 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.239968 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dc4fee7b-b7f6-48fc-98a4-4b360515a817-hosts-file\") pod \"node-resolver-7lvcw\" (UID: \"dc4fee7b-b7f6-48fc-98a4-4b360515a817\") " pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.239988 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5bx\" (UniqueName: \"kubernetes.io/projected/dc4fee7b-b7f6-48fc-98a4-4b360515a817-kube-api-access-nt5bx\") pod \"node-resolver-7lvcw\" (UID: \"dc4fee7b-b7f6-48fc-98a4-4b360515a817\") " pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.240055 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.240168 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:41.240144754 +0000 UTC m=+23.289065659 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.240240 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.240367 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:41.24034031 +0000 UTC m=+23.289261435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.244272 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.278823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.278884 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.278900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.278940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.278962 5024 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.341281 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.341541 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.341638 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dc4fee7b-b7f6-48fc-98a4-4b360515a817-hosts-file\") pod \"node-resolver-7lvcw\" (UID: \"dc4fee7b-b7f6-48fc-98a4-4b360515a817\") " pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.341549 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.341760 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.341848 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.341769 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt5bx\" (UniqueName: \"kubernetes.io/projected/dc4fee7b-b7f6-48fc-98a4-4b360515a817-kube-api-access-nt5bx\") pod \"node-resolver-7lvcw\" (UID: \"dc4fee7b-b7f6-48fc-98a4-4b360515a817\") " pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.341852 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dc4fee7b-b7f6-48fc-98a4-4b360515a817-hosts-file\") pod \"node-resolver-7lvcw\" (UID: \"dc4fee7b-b7f6-48fc-98a4-4b360515a817\") " pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.341774 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 
16:58:40.342007 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.341940 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:41.341919255 +0000 UTC m=+23.390840150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.342038 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:40 crc kubenswrapper[5024]: E1128 16:58:40.342121 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:41.34210107 +0000 UTC m=+23.391021975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.353396 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.372830 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt5bx\" (UniqueName: \"kubernetes.io/projected/dc4fee7b-b7f6-48fc-98a4-4b360515a817-kube-api-access-nt5bx\") pod \"node-resolver-7lvcw\" (UID: \"dc4fee7b-b7f6-48fc-98a4-4b360515a817\") " pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.374910 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.386298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.386684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.386775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.386878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.386986 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.398253 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.409226 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.420843 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.442471 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7lvcw" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.465042 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.484002 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.501320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.501365 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.501378 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.501396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.501410 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.503046 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.504012 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.505442 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.506097 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.507466 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.508112 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.508878 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.509917 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.510513 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.512242 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.512989 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.514275 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.514872 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.515687 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 16:58:40 
crc kubenswrapper[5024]: I1128 16:58:40.516789 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.517644 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.518878 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.519957 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.520694 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.522005 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.522611 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.524133 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.524708 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.526157 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.526648 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.527492 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.529389 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.530224 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.530952 5024 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.532349 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.533202 5024 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.533328 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.536142 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.537447 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.538000 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.540225 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.541592 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.542264 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.543415 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.544460 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.545791 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.546462 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.547447 5024 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.548081 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.548995 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.549725 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.551273 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.552265 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.553349 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.553999 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.555283 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.556192 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.556845 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.558132 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.605498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.605576 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.605599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.605636 5024 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.605659 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.645031 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ps8mf"] Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.645597 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ttb72"] Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.645857 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.647225 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4vh86"] Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.647599 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.648075 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.648582 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b2gbm"] Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652073 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652113 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652807 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652601 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652672 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652743 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652799 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.652890 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.653044 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.656050 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.656596 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.669238 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.669697 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.669703 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.669856 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.670486 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.670697 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.670974 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.671093 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.671265 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.674000 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.680997 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7lvcw" event={"ID":"dc4fee7b-b7f6-48fc-98a4-4b360515a817","Type":"ContainerStarted","Data":"640f220e9652b1a4e1bdbc0bb1684d54f5112d371d760f960dbdd9a1b3a228f1"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.683066 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.683122 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c8475814e821fd77e021d368f90b24060cb4d2b65c6a2835678f8b2e8bc459c7"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.685461 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.685547 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.685566 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7891739f5efe3848a8f469046716eeff9acf58af59fabf2a4b7c92219e77608e"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.688246 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.690365 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.690738 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.691345 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e933df99225e26cf8c1956d7f3cdcc2109bca5d6893891d79bc1136d60395086"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.697339 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.709000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.709070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.709081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.709102 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.709116 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.717994 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745391 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-etc-kubernetes\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745493 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-systemd-units\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745548 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-etc-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-cni-bin\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745663 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-k8s-cni-cncf-io\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745712 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-kubelet\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745752 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-kubelet\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745788 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-cni-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745821 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-os-release\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745867 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-system-cni-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745903 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-socket-dir-parent\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745941 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.745982 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czsc\" (UniqueName: \"kubernetes.io/projected/afb0c264-2fb7-436d-9afa-07e208efebd2-kube-api-access-2czsc\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746045 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-config\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746091 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/77bf51a4-547d-4a7b-b841-59f4fbacbd97-rootfs\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746126 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-var-lib-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746175 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-conf-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746217 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746258 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-script-lib\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746297 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77bf51a4-547d-4a7b-b841-59f4fbacbd97-mcd-auth-proxy-config\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" 
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746354 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-env-overrides\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746379 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-netns\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746401 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-multus-certs\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746443 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/afb0c264-2fb7-436d-9afa-07e208efebd2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746483 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-log-socket\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746544 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77bf51a4-547d-4a7b-b841-59f4fbacbd97-proxy-tls\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746583 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-netns\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746770 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97cac632-c692-414d-b0cf-605f0bb7719b-cni-binary-copy\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.746879 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-cnibin\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.747080 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97cac632-c692-414d-b0cf-605f0bb7719b-multus-daemon-config\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810105 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-slash\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810182 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-netd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810220 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810264 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvzd\" (UniqueName: \"kubernetes.io/projected/5b1542ec-e582-404b-8649-4a2a3e6ac398-kube-api-access-lvvzd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810369 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwg6\" (UniqueName: \"kubernetes.io/projected/97cac632-c692-414d-b0cf-605f0bb7719b-kube-api-access-5mwg6\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810408 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-ovn\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810449 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-ovn-kubernetes\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810521 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-cnibin\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810555 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-hostroot\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810593 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-bin\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810670 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-system-cni-dir\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810712 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-systemd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810757 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-cni-multus\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810795 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-os-release\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810835 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-node-log\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810869 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovn-node-metrics-cert\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.810912 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc84d\" (UniqueName: \"kubernetes.io/projected/77bf51a4-547d-4a7b-b841-59f4fbacbd97-kube-api-access-sc84d\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.811055 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/afb0c264-2fb7-436d-9afa-07e208efebd2-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.815364 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.815414 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.815442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.815481 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.815503 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.827455 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:40Z is after 2025-08-24T17:21:41Z"
Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.892045 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:40Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.915443 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:40Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924486 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97cac632-c692-414d-b0cf-605f0bb7719b-multus-daemon-config\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924559 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-slash\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-netd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924644 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924700 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvzd\" (UniqueName: 
\"kubernetes.io/projected/5b1542ec-e582-404b-8649-4a2a3e6ac398-kube-api-access-lvvzd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924745 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwg6\" (UniqueName: \"kubernetes.io/projected/97cac632-c692-414d-b0cf-605f0bb7719b-kube-api-access-5mwg6\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924754 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-slash\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924760 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-netd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924780 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-ovn\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924873 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-ovn-kubernetes\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924869 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-ovn\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924897 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-ovn-kubernetes\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924956 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-cnibin\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.924979 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-hostroot\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " 
pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925001 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-bin\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925002 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925072 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-bin\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925093 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-cnibin\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925143 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-hostroot\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925169 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-systemd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925324 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-cni-multus\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925349 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-system-cni-dir\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925354 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/97cac632-c692-414d-b0cf-605f0bb7719b-multus-daemon-config\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925386 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-systemd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925440 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-cni-multus\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925450 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-node-log\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925449 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-system-cni-dir\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925488 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovn-node-metrics-cert\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925497 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-node-log\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925527 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc84d\" (UniqueName: \"kubernetes.io/projected/77bf51a4-547d-4a7b-b841-59f4fbacbd97-kube-api-access-sc84d\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925560 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-os-release\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925580 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/afb0c264-2fb7-436d-9afa-07e208efebd2-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925618 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-systemd-units\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925639 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-etc-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-cni-bin\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925678 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-etc-kubernetes\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925699 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-kubelet\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925716 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-kubelet\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925734 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-cni-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925751 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-os-release\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925777 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-k8s-cni-cncf-io\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925793 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-system-cni-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " 
pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925809 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-socket-dir-parent\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925824 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925840 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2czsc\" (UniqueName: \"kubernetes.io/projected/afb0c264-2fb7-436d-9afa-07e208efebd2-kube-api-access-2czsc\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925861 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-config\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925887 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/77bf51a4-547d-4a7b-b841-59f4fbacbd97-rootfs\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925903 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-var-lib-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.925991 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926032 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-script-lib\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926057 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77bf51a4-547d-4a7b-b841-59f4fbacbd97-mcd-auth-proxy-config\") pod \"machine-config-daemon-ps8mf\" (UID: 
\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926075 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-conf-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926092 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-env-overrides\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926135 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-multus-certs\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926156 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/afb0c264-2fb7-436d-9afa-07e208efebd2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926173 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-log-socket\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926178 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-cni-bin\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926204 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77bf51a4-547d-4a7b-b841-59f4fbacbd97-proxy-tls\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926165 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-k8s-cni-cncf-io\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926236 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-netns\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 
16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926272 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-netns\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926301 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97cac632-c692-414d-b0cf-605f0bb7719b-cni-binary-copy\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926307 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-os-release\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926335 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-cnibin\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926364 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-netns\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926458 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-socket-dir-parent\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926459 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-netns\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926492 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-etc-kubernetes\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926534 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-var-lib-kubelet\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926565 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-kubelet\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926825 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-cni-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926900 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-os-release\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926940 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926990 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/afb0c264-2fb7-436d-9afa-07e208efebd2-cnibin\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.926318 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-system-cni-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.927585 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/afb0c264-2fb7-436d-9afa-07e208efebd2-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.927622 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97cac632-c692-414d-b0cf-605f0bb7719b-cni-binary-copy\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.927692 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-multus-conf-dir\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.927694 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-systemd-units\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 
28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.927739 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/97cac632-c692-414d-b0cf-605f0bb7719b-host-run-multus-certs\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.927925 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77bf51a4-547d-4a7b-b841-59f4fbacbd97-mcd-auth-proxy-config\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.928341 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/afb0c264-2fb7-436d-9afa-07e208efebd2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.928390 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-log-socket\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.928588 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-env-overrides\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.928678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-etc-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.928757 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-var-lib-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.928820 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/77bf51a4-547d-4a7b-b841-59f4fbacbd97-rootfs\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.928877 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-openvswitch\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.929100 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-config\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.930046 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-script-lib\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.933806 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovn-node-metrics-cert\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.934136 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77bf51a4-547d-4a7b-b841-59f4fbacbd97-proxy-tls\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.935299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.935357 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.935377 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.935406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.935422 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:40Z","lastTransitionTime":"2025-11-28T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.942393 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:40Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.951743 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvzd\" (UniqueName: \"kubernetes.io/projected/5b1542ec-e582-404b-8649-4a2a3e6ac398-kube-api-access-lvvzd\") pod \"ovnkube-node-b2gbm\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.953716 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwg6\" (UniqueName: \"kubernetes.io/projected/97cac632-c692-414d-b0cf-605f0bb7719b-kube-api-access-5mwg6\") pod \"multus-4vh86\" (UID: \"97cac632-c692-414d-b0cf-605f0bb7719b\") " pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.956050 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc84d\" (UniqueName: \"kubernetes.io/projected/77bf51a4-547d-4a7b-b841-59f4fbacbd97-kube-api-access-sc84d\") pod \"machine-config-daemon-ps8mf\" (UID: \"77bf51a4-547d-4a7b-b841-59f4fbacbd97\") " pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.959841 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czsc\" (UniqueName: \"kubernetes.io/projected/afb0c264-2fb7-436d-9afa-07e208efebd2-kube-api-access-2czsc\") pod \"multus-additional-cni-plugins-ttb72\" (UID: \"afb0c264-2fb7-436d-9afa-07e208efebd2\") " pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.966741 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:40Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.969904 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.983512 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4vh86" Nov 28 16:58:40 crc kubenswrapper[5024]: I1128 16:58:40.984358 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:40Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.003102 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:40Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.014456 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ttb72" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.020618 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.026174 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.039528 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.039579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.039593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.039614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.039629 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.051415 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: W1128 16:58:41.066129 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafb0c264_2fb7_436d_9afa_07e208efebd2.slice/crio-8893ef8f00bcea3efc2ea134a7ea826a32fdb0af1ce650836f00d0020b0523f5 WatchSource:0}: Error finding container 8893ef8f00bcea3efc2ea134a7ea826a32fdb0af1ce650836f00d0020b0523f5: Status 404 returned error can't find the container with id 8893ef8f00bcea3efc2ea134a7ea826a32fdb0af1ce650836f00d0020b0523f5 Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.073581 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.093419 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.108632 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.130616 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.147194 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.147233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.147242 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.147259 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.147271 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.155287 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.182595 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.208410 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.222859 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.229591 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.229843 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:43.229800535 +0000 UTC m=+25.278721450 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.237438 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.250319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.250372 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.250387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.250408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.250422 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.252273 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.265197 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.330455 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.330493 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.330558 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.330596 5024 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.330652 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:43.330632719 +0000 UTC m=+25.379553624 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.330668 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:43.33066236 +0000 UTC m=+25.379583255 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.353269 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.353300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.353308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.353321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.353331 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.431645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.431734 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.431907 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.431927 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.431941 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.431996 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:43.431976287 +0000 UTC m=+25.480897192 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.432446 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.432498 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.432515 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.432596 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:43.432571894 +0000 UTC m=+25.481492839 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.456582 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.456633 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.456644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.456663 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.456676 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.497563 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.497772 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.498268 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.498355 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.498418 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:41 crc kubenswrapper[5024]: E1128 16:58:41.498490 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.558970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.559006 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.559029 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.559046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.559056 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.661835 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.661902 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.661916 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.661939 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.661953 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.695100 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerStarted","Data":"ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.695360 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerStarted","Data":"8893ef8f00bcea3efc2ea134a7ea826a32fdb0af1ce650836f00d0020b0523f5"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.697296 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerStarted","Data":"a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.697332 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerStarted","Data":"69b450cb8fdfb307500a083ecbaea53e1d2cad8866db97a0eab552bc149f4f84"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.698587 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7lvcw" event={"ID":"dc4fee7b-b7f6-48fc-98a4-4b360515a817","Type":"ContainerStarted","Data":"9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.700939 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.700970 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.700984 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"4b1bbbab9b718d60fa2a38d7812fd5da3c087ea1a6fe10aa0bffe47f2573fb37"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.706045 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78" exitCode=0 Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.706732 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.706768 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"f4f82891a69ca3b29fdf2bf20318848ba35c6f27fca9f6352aaa055aaea660e0"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.724106 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.748633 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.763502 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.765452 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.765500 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.765511 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.765530 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.765543 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.796342 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.820850 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.850477 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.867100 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.867944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.867970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.867980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.867995 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.868006 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.916710 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.957442 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.971413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.971471 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.971483 5024 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.971499 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.971510 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:41Z","lastTransitionTime":"2025-11-28T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.972617 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:41 crc kubenswrapper[5024]: I1128 16:58:41.988687 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.002575 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.013850 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.029664 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.046661 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.062737 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.074809 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 
16:58:42.074858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.074870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.074891 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.074901 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.095386 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z 
is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.190159 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.192636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.192663 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.192675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.192694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.192708 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.218432 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\
\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.254730 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.270667 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.286480 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.300852 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.300897 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.300909 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.300943 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.300959 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.304751 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.306500 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-rcqbr"] Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.306883 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.309955 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.310953 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.311197 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.311412 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.329198 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.368882 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b656d6-b82b-43ff-ad36-f9ed63e26031-host\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.368950 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d1b656d6-b82b-43ff-ad36-f9ed63e26031-serviceca\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.368972 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p5gw\" (UniqueName: \"kubernetes.io/projected/d1b656d6-b82b-43ff-ad36-f9ed63e26031-kube-api-access-9p5gw\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.371223 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.389566 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.403935 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.403974 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.403984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.404000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.404009 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.432718 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.450779 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.463508 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.469694 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b656d6-b82b-43ff-ad36-f9ed63e26031-host\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.469740 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d1b656d6-b82b-43ff-ad36-f9ed63e26031-serviceca\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.469760 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p5gw\" (UniqueName: \"kubernetes.io/projected/d1b656d6-b82b-43ff-ad36-f9ed63e26031-kube-api-access-9p5gw\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " 
pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.470192 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b656d6-b82b-43ff-ad36-f9ed63e26031-host\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.471573 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d1b656d6-b82b-43ff-ad36-f9ed63e26031-serviceca\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.479932 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.487566 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p5gw\" (UniqueName: \"kubernetes.io/projected/d1b656d6-b82b-43ff-ad36-f9ed63e26031-kube-api-access-9p5gw\") pod \"node-ca-rcqbr\" (UID: \"d1b656d6-b82b-43ff-ad36-f9ed63e26031\") " pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.495235 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.507322 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.508162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.508278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.508427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.508567 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.512941 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.537516 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.576679 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.594335 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.657287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 
16:58:42.657355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.657370 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.657388 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.657400 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.679521 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z 
is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.693912 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rcqbr" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.701809 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.721390 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.742909 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.759720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.759771 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.759781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.759799 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.759813 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.760968 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.780501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.780554 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.780565 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.782885 5024 generic.go:334] "Generic (PLEG): container finished" podID="afb0c264-2fb7-436d-9afa-07e208efebd2" containerID="ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1" exitCode=0 Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.783612 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerDied","Data":"ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.807126 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.823150 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.842477 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\
"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.865817 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc 
kubenswrapper[5024]: I1128 16:58:42.869404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.869458 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.869472 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.869492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.869502 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.882555 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.897551 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.919065 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.933744 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.976472 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.979552 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.979583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.979592 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.979609 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.979619 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:42Z","lastTransitionTime":"2025-11-28T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:42 crc kubenswrapper[5024]: I1128 16:58:42.994661 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.010550 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.025044 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.038633 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.052579 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.081849 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.081882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.081890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.081915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.081924 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.186192 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.186222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.186232 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.186251 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.186262 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.282997 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.283450 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:47.283408762 +0000 UTC m=+29.332329677 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.289164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.289198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.289210 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.289228 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.289243 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.384179 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.384226 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.384361 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.384398 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.384434 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:47.384417131 +0000 UTC m=+29.433338036 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.384528 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:47.384502644 +0000 UTC m=+29.433423689 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.392693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.392744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.392757 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.392778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.392792 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.485095 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.485156 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485289 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485306 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485317 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485366 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:47.485349978 +0000 UTC m=+29.534270883 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485749 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485769 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485777 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.485803 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:47.485795251 +0000 UTC m=+29.534716156 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.496285 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.496349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.496364 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.496394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.496414 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.497696 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.497978 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.498099 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.498190 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.498284 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:43 crc kubenswrapper[5024]: E1128 16:58:43.498399 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.663505 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.663540 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.663550 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.663567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.663577 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.790795 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.791232 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.791245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.791264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.791275 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.792822 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.801544 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.801605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.801617 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.804782 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerStarted","Data":"2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.806120 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rcqbr" event={"ID":"d1b656d6-b82b-43ff-ad36-f9ed63e26031","Type":"ContainerStarted","Data":"e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.806154 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rcqbr" event={"ID":"d1b656d6-b82b-43ff-ad36-f9ed63e26031","Type":"ContainerStarted","Data":"8f0dfaac0f7aa362e563fbba06a858f86be301197e8e14cf6086a5901776ed72"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.812655 5024 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.829701 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.848454 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.861957 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.877142 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.893460 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.893507 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.893515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.893571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.893586 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.897338 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.907915 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.921786 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.936224 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.951052 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.965673 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.980368 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.996715 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.996764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.996776 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.996795 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:43 crc kubenswrapper[5024]: I1128 16:58:43.996813 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:43Z","lastTransitionTime":"2025-11-28T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.007240 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc 
kubenswrapper[5024]: I1128 16:58:44.030959 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.049191 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.065076 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.077385 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.090270 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.099844 5024 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.099886 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.099894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.099912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.099922 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.106974 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.123797 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.144565 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.158591 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.174096 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.190316 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.201989 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.202049 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.202061 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.202078 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.202091 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.205324 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.221404 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.252170 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.262613 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:44Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.304813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.304846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.304858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.304872 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.304883 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.407323 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.407370 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.407382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.407403 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.407415 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.509632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.509676 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.509684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.509700 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.509711 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.614276 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.614785 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.614799 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.614825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.614839 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.719307 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.719365 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.719379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.719406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.719422 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.838006 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.838066 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.838077 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.838093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.838104 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.941497 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.941547 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.941557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.941577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:44 crc kubenswrapper[5024]: I1128 16:58:44.941589 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:44Z","lastTransitionTime":"2025-11-28T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.044453 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.044482 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.044490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.044505 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.044515 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.147218 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.147268 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.147285 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.147303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.147312 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.250398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.250455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.250467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.250487 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.250503 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.352990 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.353045 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.353057 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.353075 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.353106 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.455331 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.455387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.455396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.455413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.455423 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.497166 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.497250 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.497284 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:45 crc kubenswrapper[5024]: E1128 16:58:45.497393 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:45 crc kubenswrapper[5024]: E1128 16:58:45.497605 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:45 crc kubenswrapper[5024]: E1128 16:58:45.497823 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.558626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.558678 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.558691 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.558725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.558738 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.661355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.661398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.661406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.661425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.661435 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.763657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.763707 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.763717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.763735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.763746 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.814828 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.816209 5024 generic.go:334] "Generic (PLEG): container finished" podID="afb0c264-2fb7-436d-9afa-07e208efebd2" containerID="2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224" exitCode=0 Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.816251 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerDied","Data":"2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.833882 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.850054 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.865749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.865790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.865800 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.865817 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.865829 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.873535 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.889062 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.903494 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.918779 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.931775 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.947537 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.963659 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.971239 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.971293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.971304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.971323 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.971336 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:45Z","lastTransitionTime":"2025-11-28T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:45 crc kubenswrapper[5024]: I1128 16:58:45.977234 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.006930 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.020937 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/
net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.039616 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.060271 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z 
is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.073197 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.073233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.073242 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.073259 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.073272 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.175602 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.175654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.175664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.175680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.175691 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.277853 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.277890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.277900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.277919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.277931 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.380662 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.380725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.380734 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.380751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.380764 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.482597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.482634 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.482642 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.482657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.482667 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.585381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.585418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.585429 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.585447 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.585460 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.688043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.688095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.688105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.688126 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.688136 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.791497 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.791542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.791553 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.791570 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.791582 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.822826 5024 generic.go:334] "Generic (PLEG): container finished" podID="afb0c264-2fb7-436d-9afa-07e208efebd2" containerID="df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e" exitCode=0 Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.822905 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerDied","Data":"df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.849096 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z 
is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.871444 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.891965 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.894478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.894552 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.894564 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.894585 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.894597 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.910990 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.926033 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.941180 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.955144 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.970329 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.987100 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.997388 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.997431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.997455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.997487 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:46 crc kubenswrapper[5024]: I1128 16:58:46.997507 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:46Z","lastTransitionTime":"2025-11-28T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.002404 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.016481 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.035987 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.052142 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.067082 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.101214 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.101387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.101445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.101477 5024 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.101530 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.203937 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.204048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.204068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.204088 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.204100 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.307320 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.307386 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.307400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.307422 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.307439 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
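
The recurring TLS failure in the status-update entries above — "x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:46Z is after 2025-08-24T17:21:41Z" — is Go's standard crypto/x509 validity-window check rejecting the webhook's serving certificate. A minimal sketch of the same comparison, assuming a PEM-encoded certificate at a hypothetical path (the path and file layout are illustrative, not taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/tmp/webhook-cert.pem") // hypothetical path, for illustration only
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	// The same window test the TLS handshake applies; the message in this
	// log is the now > NotAfter branch.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

This pattern — a cluster resumed long after its certificates' NotAfter — is typical of a CRC VM woken from a months-old image; the errors persist until the cluster's certificate-rotation controllers can run again.
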
Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.383716 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.384078 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:55.384050626 +0000 UTC m=+37.432971541 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.409811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.409860 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.409875 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.409892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.409904 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
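
The UnmountVolume.TearDown failure above — "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" — occurs because the kubelet's view of CSI drivers is an in-memory registry repopulated through each driver's registration socket after a restart; until the hostpath provisioner re-registers, lookups fail fast and the operation is retried with backoff. A toy sketch of that lookup shape (the types and the endpoint string are illustrative stand-ins, not kubelet source):

package main

import "fmt"

// csiRegistry stands in for the kubelet's registered-drivers map.
type csiRegistry struct {
	drivers map[string]string // driver name -> plugin endpoint (illustrative)
}

func (r *csiRegistry) clientFor(name string) (string, error) {
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	// The empty map models the window right after a kubelet restart, before
	// the driver's registration socket has been processed.
	reg := &csiRegistry{drivers: map[string]string{}}
	if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("UnmountVolume.TearDown failed:", err)
	}
}
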
Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.485040 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.485121 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.485167 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.485272 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:55.48524962 +0000 UTC m=+37.534170525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.485378 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.485502 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:55.485482497 +0000 UTC m=+37.534403402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.496907 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.496933 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.496977 5024 util.go:30] "No sandbox for pod can be found. 
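
The MountVolume.SetUp failures above — object "openshift-network-console"/"networking-console-plugin" not registered — come from the kubelet-side ConfigMap/Secret managers, which serve objects only for pods currently registered with them; immediately after a restart a lookup can fail even though the object exists in the API server. A simplified sketch of that gate (types and method names below are stand-ins):

package main

import "fmt"

type objectKey struct{ namespace, name string }

// cacheManager stands in for kubelet's cache-based ConfigMap/Secret manager.
type cacheManager struct {
	registered map[objectKey]bool // populated as pod workers register pods
}

func (m *cacheManager) GetObject(ns, name string) error {
	if !m.registered[objectKey{ns, name}] {
		return fmt.Errorf("object %q/%q not registered", ns, name)
	}
	return nil // the real manager would return the cached object here
}

func main() {
	m := &cacheManager{registered: map[objectKey]bool{}}
	err := m.GetObject("openshift-network-console", "networking-console-plugin")
	fmt.Println("MountVolume.SetUp failed:", err)
}
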
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.497072 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.497201 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.497378 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.512176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.512225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.512252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.512282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.512299 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
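
The "failed to patch status" bodies throughout these entries are strategic-merge patches against each pod's status subresource: the "$setElementOrder/conditions" directive lists every condition by its merge key ("type") to pin list ordering, while "conditions" itself carries only the entries whose fields changed. A minimal reconstruction of that patch shape (abbreviated; the real patches also carry containerStatuses, pod IPs, and a metadata.uid guard):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Full ordering of the list, elements identified by merge key "type":
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only the conditions whose fields changed; the rest are omitted:
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}

In this log every such PATCH is rejected before merge semantics even apply: the mutating webhook pod.network-node-identity.openshift.io must be called first, and its TLS handshake fails on the expired certificate.
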
Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.599965 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.600061 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600205 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600224 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600237 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600288 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:55.600269901 +0000 UTC m=+37.649190806 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600298 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600402 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600433 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:47 crc kubenswrapper[5024]: E1128 16:58:47.600576 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:55.600536759 +0000 UTC m=+37.649457814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.615182 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.615236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.615250 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.615272 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.615291 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
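
The "No retries permitted until ... (durationBeforeRetry 8s)" lines reflect per-operation exponential backoff in the volume manager: each consecutive failure roughly doubles the wait before the next attempt is allowed. A small sketch of that policy; the 500ms initial delay, 2x factor, and cap are illustrative defaults rather than exact kubelet constants, though 8s is what a fifth consecutive failure under 2x growth from 500ms would produce:

package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max time.Duration
	factor       float64
	current      time.Duration
}

// onError records a failure at time now and returns the earliest instant a
// retry is permitted, mirroring the log's "No retries permitted until ...".
func (b *backoff) onError(now time.Time) time.Time {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current = time.Duration(float64(b.current) * b.factor)
		if b.current > b.max {
			b.current = b.max
		}
	}
	return now.Add(b.current)
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute, factor: 2}
	now := time.Date(2025, 11, 28, 16, 58, 47, 0, time.UTC)
	for i := 1; i <= 5; i++ {
		next := b.onError(now)
		fmt.Printf("failure #%d: no retries until %s (durationBeforeRetry %s)\n",
			i, next.Format(time.RFC3339), b.current)
		now = next
	}
}
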
Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.719318 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.719372 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.719387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.719410 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.719434 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.822493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.822540 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.822551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.822565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.822575 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
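
The node's Ready=False condition keeps citing the same runtime probe: the CRI reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, and nothing will write one until the ovnkube-node pod in the entries below finishes starting. A rough sketch of such a directory probe, assuming the conventional .conf/.conflist/.json extensions (the real runtime check also parses and validates the files rather than just listing them):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady returns nil once at least one candidate CNI config exists.
func networkReady(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return fmt.Errorf("reading %s: %w", confDir, err)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return nil
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := networkReady("/etc/kubernetes/cni/net.d"); err != nil {
		fmt.Println("NetworkReady=false:", err)
	} else {
		fmt.Println("NetworkReady=true")
	}
}

Once ovnkube-node writes its conflist, this probe succeeds, the NetworkReady condition flips, and the repeated NodeNotReady events should stop.
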
Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.829199 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.829446 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.832717 5024 generic.go:334] "Generic (PLEG): container finished" podID="afb0c264-2fb7-436d-9afa-07e208efebd2" containerID="ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2" exitCode=0 Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.832748 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerDied","Data":"ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.855910 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db1
0195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.862764 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.870248 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.884580 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.901610 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.916722 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.931899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.931954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.931965 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.931988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.931999 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:47Z","lastTransitionTime":"2025-11-28T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.933551 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.948221 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.964761 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.980493 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:47 crc kubenswrapper[5024]: I1128 16:58:47.998673 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.013895 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.026869 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.034487 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.034559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.034574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.034594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.034608 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.040745 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.055056 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.068541 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.081864 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.094466 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.108919 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.124409 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.136927 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.136962 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.136970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.136984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.136994 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.140716 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.153260 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.169587 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.187501 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.206682 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.219765 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.232857 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.238706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.238753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.238764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.238778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.238789 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.248465 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.267937 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db1
0195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.340833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.340872 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.340881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.340895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.340907 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.443669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.443735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.443750 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.443780 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.443799 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.510804 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e
911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.525497 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"nam
e\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.541347 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.546094 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.546128 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.546137 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.546152 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.546162 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.567099 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.591223 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.610465 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.628319 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.640127 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.648916 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.648945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.648954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.648970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.648980 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
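
The NodeNotReady records above are the kubelet's network-readiness gate: the container runtime keeps reporting NetworkReady=false until a CNI plugin drops a configuration file into /etc/kubernetes/cni/net.d/, and at this point in the boot ovn-kubernetes has not written one yet. Below is a minimal Go sketch of an equivalent spot check; the directory is taken from the log message itself, while the extension list is an assumption based on common CNI conventions rather than the kubelet's exact logic.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path quoted in the kubelet message; adjust for other distributions.
	confDir := "/etc/kubernetes/cni/net.d"
	var confs []string
	// Assumed extension set: libcni-style config files.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pat))
		confs = append(confs, matches...)
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true:", confs)
}

Run on this node at 16:58:48, the check would fail, consistent with the repeated NodeNotReady transitions in this window.
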
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.656289 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z"
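
Every status patch in this window is rejected for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2025-11-28. A quick way to confirm what the endpoint is actually serving is to complete a handshake without verification and print the certificate validity window. A sketch, assuming it is run on the node itself; the address comes from the Post URL in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify lets us inspect the expired certificate instead of
	// failing the handshake the way the kubelet's webhook client does.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}

The kubelet's client correctly refuses the same handshake, which is exactly the x509 "certificate has expired" error quoted in each record above.
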
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.684010 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.698952 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.710247 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.724836 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.751690 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.751766 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.751782 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.751800 5024 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.751811 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.839338 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerStarted","Data":"630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e"}
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.839436 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.840795 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.854615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.854672 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.854684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.854700 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.854712 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
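
The err= payloads in these records are hard to read because the patch JSON appears to be quoted twice by the logger: once when the error string is rendered into the err="..." field, and once more where the patch is embedded inside the error message, which is why every quote in the patch surfaces as \\\". Assuming that layering, undoing one level of Go quoting and pretty-printing recovers the patch. The sketch below starts from the inner quoted string and uses a shortened, hypothetical stand-in rather than a verbatim payload.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// The patch as it would look once the outer err="..." quoting is undone:
	// still a Go-quoted string. Shortened stand-in for the payloads above.
	quoted := `"{\"metadata\":{\"uid\":\"9d751cbb-f2e2-430d-9754-c882a5e924a5\"},\"status\":{\"podIP\":null,\"podIPs\":null}}"`

	raw, err := strconv.Unquote(quoted)
	if err != nil {
		log.Fatal(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(pretty.String())
}

Applied to the record below, the same steps would expose the $setElementOrder/conditions directives and the null podIP fields that the webhook is blocking.
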
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.857615 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.863869 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.870876 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.883395 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.898248 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.916236 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db1
0195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.929107 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.946670 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.957524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.957569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.957580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.957598 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.957611 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:48Z","lastTransitionTime":"2025-11-28T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.962732 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.976585 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:48 crc kubenswrapper[5024]: I1128 16:58:48.999061 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.013350 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.025327 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.041330 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.057383 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.070279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.070339 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.070350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.070367 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.070383 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.073767 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.088676 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.102884 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.118670 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.133828 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.148689 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.164111 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.173306 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.173348 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.173357 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.173378 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.173389 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.181430 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.194721 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.212605 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.249981 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.263730 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.276423 5024 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.276492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.276506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.276523 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.276535 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.280348 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.301501 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.382556 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.382608 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.382618 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.382639 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.382652 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.486506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.486571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.486590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.486614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.486627 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.496996 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:49 crc kubenswrapper[5024]: E1128 16:58:49.497200 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.497271 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:49 crc kubenswrapper[5024]: E1128 16:58:49.497570 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.497751 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:49 crc kubenswrapper[5024]: E1128 16:58:49.497852 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.679295 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.679348 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.679360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.679378 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.679389 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.782181 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.782216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.782226 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.782244 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.782255 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.843344 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.885356 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.885394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.885410 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.885431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:49 crc kubenswrapper[5024]: I1128 16:58:49.885446 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:49Z","lastTransitionTime":"2025-11-28T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.002934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.002974 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.002989 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.003005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.003015 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.105805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.105855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.105874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.105899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.105917 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.209012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.209080 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.209092 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.209111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.209122 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.311988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.312069 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.312084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.312104 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.312118 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.330356 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.330441 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.330466 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.330501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.330526 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: E1128 16:58:50.352124 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.358238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.358287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.358305 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.358330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.358347 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: E1128 16:58:50.388400 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [image list and nodeInfo elided: byte-identical to the 16:58:50.352124 entry above] }}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.395830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.395918 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.395941 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.395982 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.396055 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.419745 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:50 crc kubenswrapper[5024]: E1128 16:58:50.432499 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [image list and nodeInfo elided: byte-identical to the 16:58:50.352124 entry above] }}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.443167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.443210 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.443219 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.443236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.443248 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.451123 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: E1128 16:58:50.458703 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [image list and nodeInfo elided: byte-identical to the 16:58:50.352124 entry above] }}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.466161 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.466215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.466223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.466239 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.466251 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.470260 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: E1128 16:58:50.478926 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: E1128 16:58:50.479141 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.481361 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.481403 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.481415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.481436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.481451 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.485623 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.500969 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.521629 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.538124 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bina
ry-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.559429 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.574567 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.584344 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.584410 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.584425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.584447 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.584460 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.590619 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.607433 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.624075 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.640044 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.657808 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.675355 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.687361 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.687401 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.687412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.687431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.687446 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.790184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.790229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.790240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.790261 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.790272 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.892825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.892898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.892910 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.892930 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.892940 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.965287 5024 generic.go:334] "Generic (PLEG): container finished" podID="afb0c264-2fb7-436d-9afa-07e208efebd2" containerID="630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e" exitCode=0 Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.965487 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.966498 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerDied","Data":"630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e"} Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.993319 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.994753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.994803 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.994829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.994848 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:50 crc kubenswrapper[5024]: I1128 16:58:50.994857 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:50Z","lastTransitionTime":"2025-11-28T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.008604 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.024283 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.048511 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.066877 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.087891 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.099658 5024 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.099700 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.099716 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.099740 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.099757 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.114879 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db1
0195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.129630 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.150213 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.167790 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.182467 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.202086 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.202139 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.202150 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.202169 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.202182 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.203655 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.219245 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.233178 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.304887 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.304947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.304957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.304976 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.304998 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.408458 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.408507 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.408517 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.408536 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.408548 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.580044 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.580119 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.580179 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:51 crc kubenswrapper[5024]: E1128 16:58:51.580259 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:51 crc kubenswrapper[5024]: E1128 16:58:51.580444 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:51 crc kubenswrapper[5024]: E1128 16:58:51.580994 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.590744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.590804 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.590822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.590895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.590915 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.694354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.694385 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.694394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.694408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.694419 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.797086 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.797142 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.797153 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.797171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.797203 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.899705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.899770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.899786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.899806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.900171 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:51Z","lastTransitionTime":"2025-11-28T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.973078 5024 generic.go:334] "Generic (PLEG): container finished" podID="afb0c264-2fb7-436d-9afa-07e208efebd2" containerID="1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac" exitCode=0 Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.973133 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerDied","Data":"1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac"} Nov 28 16:58:51 crc kubenswrapper[5024]: I1128 16:58:51.990436 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.002480 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.002513 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.002524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.002558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.002572 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.007162 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.019911 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.031889 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.063979 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.076874 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.091475 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.105934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.105988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.106000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.106034 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.106057 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.112653 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.129528 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.145353 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.164526 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.211548 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e61309
3de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.213593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.213626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.213635 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.213651 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.213663 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.249497 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db1
0195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.268336 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.317227 5024 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.317267 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.317281 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.317303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.317315 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.420411 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.420469 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.420481 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.420499 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.420509 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.523254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.523644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.523655 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.523671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.523681 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.626650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.626694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.626705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.626725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.626737 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.730167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.730669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.730785 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.730944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.731589 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.837660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.837747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.837771 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.837806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.837831 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.941666 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.941756 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.941781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.941814 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[5024]: I1128 16:58:52.941844 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.044960 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.045044 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.045264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.045546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.045580 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.148608 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.148657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.148670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.148691 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.148703 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.251154 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.251216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.251228 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.251253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.251265 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.354530 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.354603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.354616 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.354636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.354651 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.458358 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.458404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.458415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.458433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.458446 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.497874 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:53 crc kubenswrapper[5024]: E1128 16:58:53.498051 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.498383 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.498485 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:53 crc kubenswrapper[5024]: E1128 16:58:53.498529 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:53 crc kubenswrapper[5024]: E1128 16:58:53.498686 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.533273 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g"] Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.534475 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.537278 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.538549 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.553948 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.561566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.561635 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.561646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.561664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.561676 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.571152 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.585331 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.599446 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.614103 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.629634 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.647397 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\
\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.664472 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.664512 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.664522 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.664540 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.664552 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.668785 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db1
0195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.683188 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.696146 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.708259 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fda6719c-a2bb-4a93-bafe-3118fb33bb19-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.708348 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fda6719c-a2bb-4a93-bafe-3118fb33bb19-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.708552 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2xr\" (UniqueName: \"kubernetes.io/projected/fda6719c-a2bb-4a93-bafe-3118fb33bb19-kube-api-access-4c2xr\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.708659 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fda6719c-a2bb-4a93-bafe-3118fb33bb19-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.708770 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.723649 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.739897 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.757162 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.767429 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.767474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.767487 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.767506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.767533 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.769831 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.810255 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fda6719c-a2bb-4a93-bafe-3118fb33bb19-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.810339 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fda6719c-a2bb-4a93-bafe-3118fb33bb19-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.810379 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2xr\" (UniqueName: \"kubernetes.io/projected/fda6719c-a2bb-4a93-bafe-3118fb33bb19-kube-api-access-4c2xr\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.810425 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fda6719c-a2bb-4a93-bafe-3118fb33bb19-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.811359 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fda6719c-a2bb-4a93-bafe-3118fb33bb19-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.811433 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fda6719c-a2bb-4a93-bafe-3118fb33bb19-env-overrides\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.817578 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fda6719c-a2bb-4a93-bafe-3118fb33bb19-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.828791 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2xr\" (UniqueName: \"kubernetes.io/projected/fda6719c-a2bb-4a93-bafe-3118fb33bb19-kube-api-access-4c2xr\") pod \"ovnkube-control-plane-749d76644c-h4h4g\" (UID: \"fda6719c-a2bb-4a93-bafe-3118fb33bb19\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.848717 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.869497 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.869708 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.869846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.869984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.870143 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.974728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.974777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.974788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.974815 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.974825 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[5024]: I1128 16:58:53.982161 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" event={"ID":"fda6719c-a2bb-4a93-bafe-3118fb33bb19","Type":"ContainerStarted","Data":"1be5baea0c3656ad183586aaebd7dd9edf5a404c4fafd61edd73a7b1ff4dd96e"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.077882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.077976 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.077991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.078043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.078072 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.184280 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.184355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.184375 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.184402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.184417 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.287429 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.287486 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.287498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.287519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.287536 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.390387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.390431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.390440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.390454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.390464 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.492868 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.492925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.492937 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.492956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.492972 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.753686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.753761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.753776 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.753792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.753802 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.856878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.856931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.856947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.856983 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.856999 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.959764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.960331 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.960351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.960399 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.960412 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:54Z","lastTransitionTime":"2025-11-28T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.988375 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" event={"ID":"fda6719c-a2bb-4a93-bafe-3118fb33bb19","Type":"ContainerStarted","Data":"011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.988454 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" event={"ID":"fda6719c-a2bb-4a93-bafe-3118fb33bb19","Type":"ContainerStarted","Data":"007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94"} Nov 28 16:58:54 crc kubenswrapper[5024]: I1128 16:58:54.996056 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" event={"ID":"afb0c264-2fb7-436d-9afa-07e208efebd2","Type":"ContainerStarted","Data":"007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.006527 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28
fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.023252 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\
\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.048684 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.063070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.063168 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.063179 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.063197 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.063208 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.074137 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.087278 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.090204 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5t4kc"] Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.090734 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.090818 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.101250 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.115179 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.128425 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.144779 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.153826 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.154087 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpwz\" (UniqueName: \"kubernetes.io/projected/949e234b-60b0-40e4-a423-0596dafd56c1-kube-api-access-hwpwz\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:55 
crc kubenswrapper[5024]: I1128 16:58:55.159173 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.165987 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.166068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.166081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.166107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.166132 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.174663 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.188685 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.201733 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.215832 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 
16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.232671 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.252968 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.255450 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.255494 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwpwz\" (UniqueName: \"kubernetes.io/projected/949e234b-60b0-40e4-a423-0596dafd56c1-kube-api-access-hwpwz\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.255750 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.255885 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:55.755859978 +0000 UTC m=+37.804780883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.269634 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.269698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.269708 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.269743 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.269756 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.270696 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.279491 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwpwz\" (UniqueName: \"kubernetes.io/projected/949e234b-60b0-40e4-a423-0596dafd56c1-kube-api-access-hwpwz\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.288095 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.308125 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.323162 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.337795 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.355627 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.371619 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
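
Because every status patch in this log dies at the same TLS handshake, the webhook endpoint can also be probed directly rather than inferred from kubelet output. A diagnostic sketch follows; only the 127.0.0.1:9743 address comes from the records, and verification is skipped deliberately so the expired certificate can be printed instead of rejected.

// Diagnostic sketch, assuming the endpoint quoted in the records above.
package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
        InsecureSkipVerify: true, // inspect the cert; do not verify it here
    })
    if err != nil {
        fmt.Println("handshake failed:", err)
        return
    }
    defer conn.Close()
    certs := conn.ConnectionState().PeerCertificates
    if len(certs) == 0 {
        fmt.Println("no peer certificate presented")
        return
    }
    fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
        certs[0].Subject, certs[0].NotBefore, certs[0].NotAfter)
}
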
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.372469 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.372578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.372653 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.372735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.372808 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.386839 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.404056 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.417102 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.430485 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.445709 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.456775 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.457099 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:59:11.457075153 +0000 UTC m=+53.505996058 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.457884 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.470912 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.475739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.475773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.475783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.475802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.475815 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.485252 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.497799 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.497870 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.497799 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.497964 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.498110 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.498217 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.557714 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.557768 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.557819 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.557882 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:11.557865516 +0000 UTC m=+53.606786421 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.557885 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.557923 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:11.557912197 +0000 UTC m=+53.606833102 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.578089 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.578135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.578144 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.578159 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.578171 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.658661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.658720 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658835 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658857 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658868 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658914 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-28 16:59:11.658899055 +0000 UTC m=+53.707819960 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658835 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658942 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658951 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.658978 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:11.658969767 +0000 UTC m=+53.707890672 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.679862 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.679898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.679908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.679925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.679935 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.759423 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.759607 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: E1128 16:58:55.759671 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:56.759652547 +0000 UTC m=+38.808573452 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.782454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.782515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.782523 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.782540 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.782550 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.885360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.885404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.885412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.885434 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.885452 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.988350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.988383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.988391 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.988404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[5024]: I1128 16:58:55.988414 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.001688 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/0.log" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.004337 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9" exitCode=1 Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.004420 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.005052 5024 scope.go:117] "RemoveContainer" containerID="74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.022610 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-c
ert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.038346 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.051613 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.069609 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.084805 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.090850 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.090890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.090900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.090918 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.090928 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.098659 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.114553 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.129156 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.141286 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 
16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.156358 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.171856 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.183505 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.192884 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.192908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.192915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.192930 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.192940 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.199069 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 
2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.219179 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"message\\\":\\\"950305 6216 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:58:54.950333 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 16:58:54.951875 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 16:58:54.951935 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 16:58:54.950560 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952165 6216 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 16:58:54.950649 6216 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952387 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.950695 6216 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:58:54.950802 6216 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:58:54.953854 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:58:54.953985 6216 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.231014 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.246384 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountP
ath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.295665 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.295723 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.295738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.295760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.295777 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.398992 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.399071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.399091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.399111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.399126 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.497255 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:56 crc kubenswrapper[5024]: E1128 16:58:56.497416 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.501427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.501459 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.501468 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.501480 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.501490 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.603761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.603799 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.603809 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.603825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.603834 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.706486 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.706538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.706552 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.706574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.706588 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.785459 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:56 crc kubenswrapper[5024]: E1128 16:58:56.785594 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:56 crc kubenswrapper[5024]: E1128 16:58:56.785665 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:58.785644872 +0000 UTC m=+40.834565787 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.810347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.810423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.810433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.810451 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.810462 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.912746 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.912796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.912805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.912820 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[5024]: I1128 16:58:56.912831 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.009496 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/0.log" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.011779 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.011955 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.015068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.015108 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.015122 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.015141 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.015155 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.032965 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.051400 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.062270 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.078491 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 
16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.099838 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a32658
9e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"message\\\":\\\"950305 6216 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:58:54.950333 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 16:58:54.951875 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 16:58:54.951935 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 16:58:54.950560 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952165 6216 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 16:58:54.950649 6216 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952387 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.950695 6216 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:58:54.950802 6216 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:58:54.953854 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:58:54.953985 6216 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.111901 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.117848 5024 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.117901 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.117912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.117930 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.117942 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.124780 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.140303 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8
ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.152192 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.167232 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.182422 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.195692 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.207655 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.220591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.220639 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.220657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.220676 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.220693 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.220656 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.236714 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.247401 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.324061 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.324118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.324135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.324157 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.324173 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.427224 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.427266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.427278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.427295 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.427308 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.497399 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.497399 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:57 crc kubenswrapper[5024]: E1128 16:58:57.497605 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:57 crc kubenswrapper[5024]: E1128 16:58:57.497685 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.497408 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:57 crc kubenswrapper[5024]: E1128 16:58:57.497900 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.529892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.529935 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.529952 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.529970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.529982 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.632240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.632277 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.632288 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.632303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.632314 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.735361 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.735669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.735689 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.735712 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.735726 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.838236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.838282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.838293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.838330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.838344 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.941400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.941597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.941613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.941631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[5024]: I1128 16:58:57.941641 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.017378 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/1.log" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.018162 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/0.log" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.020914 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d" exitCode=1 Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.020964 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.021049 5024 scope.go:117] "RemoveContainer" containerID="74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.021887 5024 scope.go:117] "RemoveContainer" containerID="4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d" Nov 28 16:58:58 crc kubenswrapper[5024]: E1128 16:58:58.022113 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.038652 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.044310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.044331 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.044339 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.044352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.044363 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.052609 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.063186 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.075163 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.086527 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.097247 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.110756 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 
16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.121337 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.133257 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.145711 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-c
opy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.146309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.146337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.146347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.146360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.146369 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.162556 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9
609c4be6e15a66d3bc389a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"message\\\":\\\"950305 6216 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:58:54.950333 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 16:58:54.951875 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 16:58:54.951935 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 16:58:54.950560 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952165 6216 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 16:58:54.950649 6216 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952387 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.950695 6216 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:58:54.950802 6216 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:58:54.953854 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:58:54.953985 6216 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.172701 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.187230 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.209534 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.220704 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.232581 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.248467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.248516 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.248529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.248549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.248563 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.351091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.351126 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.351135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.351151 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.351161 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.453394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.453431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.453440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.453453 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.453463 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.497199 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:58 crc kubenswrapper[5024]: E1128 16:58:58.497359 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.513188 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.530236 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.545104 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.556095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.556379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.556463 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.556588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.556672 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.559279 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.571651 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.589215 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.604290 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.614775 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.628417 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.643946 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.657432 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 
16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.658922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.658956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.658971 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.658991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.659008 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.671913 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.684418 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.696682 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.714731 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.737172 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9
609c4be6e15a66d3bc389a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bfc8328da1d39e3f31d6309c3dbfe46d8c8db10195747b5d076e78a463ece9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"message\\\":\\\"950305 6216 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:58:54.950333 6216 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 16:58:54.951875 6216 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 16:58:54.951935 6216 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 16:58:54.950560 6216 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952165 6216 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 16:58:54.950649 6216 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.952387 6216 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:58:54.950695 6216 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:58:54.950802 6216 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:58:54.953854 6216 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:58:54.953985 6216 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.761513 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.761566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.761576 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.761596 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.761608 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.805906 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:58:58 crc kubenswrapper[5024]: E1128 16:58:58.806098 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:58 crc kubenswrapper[5024]: E1128 16:58:58.806162 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:02.806144488 +0000 UTC m=+44.855065393 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.864069 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.864120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.864129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.864145 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.864158 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.966692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.966728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.966737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.966751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[5024]: I1128 16:58:58.966760 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.038511 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/1.log" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.069145 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.069183 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.069195 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.069211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.069221 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.171910 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.171975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.171985 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.172005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.172042 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.274614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.274655 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.274664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.274680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.274690 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.378045 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.378282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.378295 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.378315 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.378328 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.480470 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.480515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.480525 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.480542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.480553 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.497409 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.497438 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.497433 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:59 crc kubenswrapper[5024]: E1128 16:58:59.498006 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:59 crc kubenswrapper[5024]: E1128 16:58:59.498173 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:59 crc kubenswrapper[5024]: E1128 16:58:59.498117 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.582735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.582779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.582788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.582803 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.582813 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.685273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.685312 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.685321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.685338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.685348 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.788494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.788557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.788569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.788588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.788602 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.891336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.891381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.891400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.891419 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.891431 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.993708 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.993754 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.993763 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.993779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[5024]: I1128 16:58:59.993790 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.095668 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.095733 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.095745 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.095778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.095790 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.198264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.198309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.198319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.198334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.198348 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.300501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.300548 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.300557 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.300579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.300590 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.403966 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.404062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.404085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.404111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.404130 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.497680 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:00 crc kubenswrapper[5024]: E1128 16:59:00.498106 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.506376 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.506442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.506453 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.506473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.506503 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.609494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.609555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.609570 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.609593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.609608 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.695068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.695117 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.695132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.695149 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.695163 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: E1128 16:59:00.716540 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.721963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.721997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.722056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.722074 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.722086 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: E1128 16:59:00.735499 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.740701 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.740731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
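[annotation] The status-patch failures have a separate cause from the CNI gap: every attempt dies calling the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, months before the current clock of 2025-11-28. A small Go sketch to read the presented certificate's validity window from the node is below; verification is skipped on purpose so the expired chain can still be inspected.

// Minimal sketch, assuming it runs on the node itself: dial the webhook
// endpoint from the error above and print each presented certificate's
// validity window. InsecureSkipVerify is deliberate; the goal is to read
// the expired certificate, not to trust it.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspection only
	})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore.UTC(), cert.NotAfter.UTC())
	}
}

If the printed notAfter matches the x509 error (2025-08-24T17:21:41Z), these patch retries will keep failing regardless of CNI progress until that certificate is rotated or re-issued.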
event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.740738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.740754 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.740763 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: E1128 16:59:00.758151 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.764208 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.764266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.764280 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.764301 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.764315 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: E1128 16:59:00.780769 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.785869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.785913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.785925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.785944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.785961 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: E1128 16:59:00.800540 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[5024]: E1128 16:59:00.800694 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.802656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.802701 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.802712 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.802731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.802744 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.905222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.905257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.905267 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.905284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[5024]: I1128 16:59:00.905298 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.008133 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.008203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.008216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.008233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.008247 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.110811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.110881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.110945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.110974 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.110988 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.213840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.213894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.213906 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.213923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.213934 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.320297 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.320337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.320347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.320364 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.320374 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.423698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.423858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.423882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.423918 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.423937 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.497740 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.497790 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.497860 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:01 crc kubenswrapper[5024]: E1128 16:59:01.498305 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:01 crc kubenswrapper[5024]: E1128 16:59:01.498403 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:01 crc kubenswrapper[5024]: E1128 16:59:01.498559 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.527050 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.527271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.527380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.527496 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.527613 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.634639 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.634681 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.634690 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.634705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.634715 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.737698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.737749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.737761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.737779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.737793 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.841429 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.841479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.841490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.841510 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.841524 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.943922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.943957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.943965 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.943980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[5024]: I1128 16:59:01.943990 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.046735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.046789 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.046808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.046834 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.046851 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.149436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.149524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.149564 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.149601 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.149624 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.251774 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.251812 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.251823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.251838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.251847 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.353919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.353981 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.353994 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.354034 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.354046 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.456113 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.456162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.456172 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.456189 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.456202 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.497893 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:02 crc kubenswrapper[5024]: E1128 16:59:02.498117 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.559002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.559071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.559084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.559104 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.559120 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.662178 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.662262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.662273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.662287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.662297 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.765267 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.765304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.765313 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.765330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.765342 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.844368 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:02 crc kubenswrapper[5024]: E1128 16:59:02.844556 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:02 crc kubenswrapper[5024]: E1128 16:59:02.844621 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:10.844596168 +0000 UTC m=+52.893517073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.868062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.868106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.868115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.868131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.868142 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.970833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.970881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.970890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.970906 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[5024]: I1128 16:59:02.970917 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.073100 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.073203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.073227 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.073245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.073256 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.176413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.177122 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.177154 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.177179 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.177191 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.232537 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.233372 5024 scope.go:117] "RemoveContainer" containerID="4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d" Nov 28 16:59:03 crc kubenswrapper[5024]: E1128 16:59:03.233532 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.249595 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.263793 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.338773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.338833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.338845 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.338864 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.338878 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.339771 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.354146 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.366427 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.382717 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.400132 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.413101 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.428723 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.442240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.442284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.442296 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.442316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.442331 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.445718 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.462237 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.475311 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.489340 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.497006 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.497078 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.497135 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:59:03 crc kubenswrapper[5024]: E1128 16:59:03.497192 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:59:03 crc kubenswrapper[5024]: E1128 16:59:03.497322 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:03 crc kubenswrapper[5024]: E1128 16:59:03.497426 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.503530 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.521555 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cn
i-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e
7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" 
for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.545473 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.545721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.545800 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.545810 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.545833 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.545846 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.648369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.648414 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.648425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.648442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.648455 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.751728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.751777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.751787 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.751806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.751818 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.855071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.855156 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.855181 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.855220 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.855246 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.958087 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.958165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.958182 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.958209 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[5024]: I1128 16:59:03.958226 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.060449 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.060509 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.060521 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.060540 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.060551 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.164370 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.164423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.164439 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.164459 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.164475 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.267525 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.267578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.267597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.267614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.267629 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.371052 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.371094 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.371106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.371122 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.371132 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.473790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.473831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.473840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.473855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.473863 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.497843 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:04 crc kubenswrapper[5024]: E1128 16:59:04.498067 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.576821 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.576881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.576894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.576922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.576936 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.679544 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.679640 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.679655 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.679678 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.679693 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.782896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.782959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.782977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.783001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.783046 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.885668 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.885711 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.885718 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.885737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.885761 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.989652 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.989700 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.989712 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.989728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[5024]: I1128 16:59:04.989739 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.093003 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.093079 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.093093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.093113 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.093128 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.196136 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.196203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.196216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.196237 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.196248 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.299287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.299331 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.299344 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.299365 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.299377 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.402766 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.402827 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.402838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.402856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.402867 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.497062 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.497099 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.497099 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:05 crc kubenswrapper[5024]: E1128 16:59:05.497440 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:05 crc kubenswrapper[5024]: E1128 16:59:05.497269 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:05 crc kubenswrapper[5024]: E1128 16:59:05.497546 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.504850 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.504879 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.504889 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.504905 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.504916 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.608160 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.608207 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.608217 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.608234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.608244 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.711721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.711802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.711822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.711851 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.711870 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.814900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.814957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.814967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.814986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.815072 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.918287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.918332 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.918340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.918359 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[5024]: I1128 16:59:05.918374 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.021440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.021490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.021502 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.021524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.021536 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.124012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.124095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.124106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.124130 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.124148 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.227574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.227637 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.227664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.227690 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.227708 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.331271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.331324 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.331334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.331351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.331365 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.434866 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.434924 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.434935 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.434955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.434967 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.497834 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:06 crc kubenswrapper[5024]: E1128 16:59:06.498121 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.539802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.539849 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.539859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.539881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.539890 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.642849 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.642903 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.642912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.642931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[5024]: I1128 16:59:06.642945 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.367636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.367713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.367726 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.367747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.367783 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.471866 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.471929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.471952 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.471979 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.471995 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.497357 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.497397 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.497464 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:07 crc kubenswrapper[5024]: E1128 16:59:07.497578 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:07 crc kubenswrapper[5024]: E1128 16:59:07.497826 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:07 crc kubenswrapper[5024]: E1128 16:59:07.497710 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.575493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.575542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.575554 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.575574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.575585 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.678933 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.678993 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.679011 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.679152 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[5024]: I1128 16:59:07.679177 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.091440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.091506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.091523 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.091550 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.091567 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.191137 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.195563 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.195617 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.195634 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.195662 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.195674 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.203829 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.207748 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
25-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.222415 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.233583 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.247614 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.259856 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.272143 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.282525 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.295468 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.298775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.298803 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.298812 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.298829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.298839 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.309449 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\
\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.328570 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.349380 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.362807 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.375494 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.387476 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.399684 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.402158 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.402217 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.402229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.402248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.402258 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.409777 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.497670 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:08 crc kubenswrapper[5024]: E1128 16:59:08.497879 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.504656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.504691 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.504701 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.504714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.504723 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.512134 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.526594 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.541340 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea60
8b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.561184 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.572372 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.588196 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.604036 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.607246 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.607310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.607325 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.607343 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.607359 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.618595 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.631681 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.649289 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.663843 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.678996 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.691599 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.707454 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.709696 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.709735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.709745 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.709763 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.709774 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.720281 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.731997 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.743082 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z"
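All three status patches above are rejected for the same reason: the pod.network-node-identity.openshift.io webhook that admission calls at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, roughly three months before these entries. A minimal Go sketch for confirming this from the node itself (assuming the endpoint is reachable locally; this is a diagnostic illustration, not part of the kubelet):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Handshake with verification disabled: we only want to read the
        // presented certificate, not trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true,
        })
        if err != nil {
            log.Fatalf("handshake failed: %v", err)
        }
        defer conn.Close()
        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", leaf.Subject)
        fmt.Println("notBefore:", leaf.NotBefore)
        fmt.Println("notAfter: ", leaf.NotAfter) // expect 2025-08-24T17:21:41Z per the log
    }

Until that certificate is rotated, every pod status patch the kubelet sends will keep failing the same way.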
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.812895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.812973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.812986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.813004 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.813044 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.916176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.916263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.916273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.916293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:08 crc kubenswrapper[5024]: I1128 16:59:08.916303 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.019124 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.019172 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.019186 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.019207 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.019219 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.121870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.121935 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.121958 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.121990 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.122046 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.225362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.225454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.225478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.225507 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.225527 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.328670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.328722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.328739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.328764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.328782 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
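The five-line pattern above (four recorded events plus one "Node became not ready") repeats on every status sync, roughly every 100ms, because the Ready condition stays False while /etc/kubernetes/cni/net.d/ holds no CNI configuration. The condition={...} payload is ordinary JSON and can be lifted out of a line and decoded; a small sketch (the struct is inferred from the payload itself, not taken from kubelet source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeCondition mirrors the fields visible in the condition={...} payload.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // One payload copied verbatim from the log above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }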
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.432336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.432406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.432424 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.432450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.432466 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.497213 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.497341 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:09 crc kubenswrapper[5024]: E1128 16:59:09.497425 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:59:09 crc kubenswrapper[5024]: E1128 16:59:09.497557 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.497213 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:59:09 crc kubenswrapper[5024]: E1128 16:59:09.497716 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
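The pods skipped above all need a pod-network sandbox, so the kubelet refuses to sync them until NetworkReady flips back to true; host-network pods do not hit this path. A hypothetical triage helper that pulls the blocked pods out of a journal dump on stdin (the pattern and buffer size are assumptions, not derived from kubelet code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Match the pod and UID at the tail of each "Error syncing pod" entry.
        re := regexp.MustCompile(`Error syncing pod, skipping.*pod="([^"]+)" podUID="([^"]+)"`)
        seen := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries here can exceed the 64 KiB default
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil && !seen[m[1]] {
                seen[m[1]] = true
                fmt.Printf("%s (uid %s)\n", m[1], m[2])
            }
        }
    }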
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.535381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.535442 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.535465 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.535494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.535515 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.640007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.640123 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.640146 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.640176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.640198 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.744193 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.744253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.744279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.744363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.744392 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.847699 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.847805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.847830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.847858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.847876 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.950755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.950825 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.950843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.950871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:09 crc kubenswrapper[5024]: I1128 16:59:09.950889 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.054513 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.054590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.054613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.054647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.054669 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.158059 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.158118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.158137 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.158164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.158194 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.261298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.261383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.261408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.261441 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.261465 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.368706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.368797 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.368910 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.368981 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.369009 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.473127 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.474162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.474202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.474234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.474256 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.497770 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc"
Nov 28 16:59:10 crc kubenswrapper[5024]: E1128 16:59:10.498110 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.576863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.576915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.576927 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.576950 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.576967 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.680147 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.680197 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.680211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.680232 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.680246 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.783892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.783959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.783977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.784000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.784038 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.887847 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.887929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.887955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.887986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.888011 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.938653 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc"
Nov 28 16:59:10 crc kubenswrapper[5024]: E1128 16:59:10.938912 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 16:59:10 crc kubenswrapper[5024]: E1128 16:59:10.939016 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:26.938989958 +0000 UTC m=+68.987910893 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
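The failed metrics-certs mount is not retried immediately: "durationBeforeRetry 16s" pushes the next attempt out to 16:59:26, and the m=+68.987910893 suffix is Go's monotonic-clock annotation on the timestamp. The 16s figure is consistent with a per-volume delay that doubles on each consecutive failure; a sketch of that schedule (the 500ms starting point and two-minute cap are assumptions, not values read from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // assumed initial delay
        const cap = 2 * time.Minute     // assumed upper bound
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failure %d: next retry in %v\n", attempt, delay)
            delay *= 2 // double after every consecutive failure
            if delay > cap {
                delay = cap
            }
        }
    }

Under these assumptions, 16s corresponds to the sixth consecutive failure of the same mount operation.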
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.974277 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.974359 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.974382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.974415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:10 crc kubenswrapper[5024]: I1128 16:59:10.974435 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.001125 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:10Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.007532 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.007592 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
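Node-status patches fail through the same expired webhook certificate as the pod patches, so even heartbeats cannot land and the loop above keeps recording the same events. The log's own timestamps quantify how stale the certificate is; a one-off calculation:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both timestamps are taken verbatim from the log entry above.
        notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
        seen, _ := time.Parse(time.RFC3339, "2025-11-28T16:59:10Z")
        fmt.Printf("webhook certificate expired %s before this entry\n",
            seen.Sub(notAfter).Round(time.Hour)) // roughly 96 days
    }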
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.007615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.007645 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.007665 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.023145 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.028406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.028462 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.028478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.028501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.028517 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.047085 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.053366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.053418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
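Each rejected payload also snapshots the node's resources: capacity cpu 12 and memory 32865360Ki against allocatable cpu 11800m and memory 32404560Ki, i.e. 200m of CPU and 450Mi of memory withheld for system and kube reservations. The arithmetic checks out with the apimachinery quantity types (a sketch assuming the k8s.io/apimachinery module is available):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	capCPU := resource.MustParse("12")
	allocCPU := resource.MustParse("11800m")
	capMem := resource.MustParse("32865360Ki")
	allocMem := resource.MustParse("32404560Ki")

	// Reserved = capacity - allocatable (system + kube reservations).
	capCPU.Sub(allocCPU)
	capMem.Sub(allocMem)
	fmt.Println("reserved cpu:", capCPU.String())    // 200m
	fmt.Println("reserved memory:", capMem.String()) // 460800Ki = 450Mi
}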
event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.053431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.053452 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.053465 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.068887 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.073853 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.073901 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.073919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.073944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.073962 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.091904 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.092248 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.094330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
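The patch attempts above fail within roughly 70 ms of one another before the kubelet gives up with "update node status exceeds retry count". That ceiling comes from a small bounded-retry loop in kubelet_node_status.go; the sketch below reproduces the pattern, with the attempt count of 5 matching the kubelet's nodeStatusUpdateRetry constant and tryUpdate standing in for the actual PATCH call:

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the kubelet's bound on consecutive
// node-status patch attempts.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(tryUpdate func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := tryUpdate()
		if err == nil {
			return nil
		}
		fmt.Printf("Error updating node status, will retry: %v\n", err)
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	// Stand-in for the PATCH that the admission webhook keeps rejecting.
	err := updateNodeStatus(func() error {
		return errors.New("failed calling webhook: certificate has expired")
	})
	fmt.Println(err)
}

Exhausting the retries is not fatal: the whole cycle restarts on the next heartbeat interval, which is why the same burst recurs throughout the log.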
event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.094377 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.094416 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.094438 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.094450 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.198248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.198318 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.198342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.198382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.198407 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.302294 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.302366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.302385 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.302418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.302439 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.406005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.406107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.406125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.406157 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.406178 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.497614 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.497698 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.497630 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.497804 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.497991 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.498066 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
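Every NotReady heartbeat and skipped pod sync in this stretch carries the same proximate cause: the runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/, and pod sandboxes cannot be created until the network provider does. A rough sketch of that directory probe (the accepted extensions follow libcni's defaults; this approximates the runtime's check rather than quoting it):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any CNI network
// configuration file (libcni accepts .conf, .conflist and .json).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("container runtime network not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}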
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.508491 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.508534 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.508546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.508563 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.508575 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.545234 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.545392 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:59:43.545366093 +0000 UTC m=+85.594287008 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.611450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.611514 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.611531 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.611553 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.611566 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.646640 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.646716 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.646833 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.646861 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.647001 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:43.64696734 +0000 UTC m=+85.695888285 (durationBeforeRetry 32s). 
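The TearDown failure above is a registry-lookup problem rather than an I/O one: the kubelet resolves CSI volume operations through a list of drivers that have announced themselves over the plugin-registration socket, and kubevirt.io.hostpath-provisioner is absent from that list (its pod cannot run while the network is down), so the operation fails immediately and is re-queued with a 32 s backoff. A condensed sketch of the lookup, with illustrative types rather than the kubelet's own:

package main

import (
	"fmt"
	"sync"
)

// csiDriversStore mimics the kubelet's registry of CSI drivers that
// have registered over the plugin-registration socket.
type csiDriversStore struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint
}

func (s *csiDriversStore) client(name string) (string, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	ep, ok := s.drivers[name]
	if !ok {
		return "", fmt.Errorf(
			"driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	store := &csiDriversStore{drivers: map[string]string{}}
	// The hostpath provisioner never registered, so TearDown fails fast
	// and the operation is retried later (durationBeforeRetry 32s).
	if _, err := store.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("UnmountVolume.TearDown failed:", err)
	}
}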
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.647243 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:43.647178467 +0000 UTC m=+85.696099382 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.714881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.714919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.714931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.714946 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.714958 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.748863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.748940 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749152 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749172 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749184 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749247 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:43.749232198 +0000 UTC m=+85.798153103 (durationBeforeRetry 32s). 
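The "not registered" failures below are a caching gate, not missing API objects: the kubelet only watches configMaps and secrets on behalf of pods it has started syncing, and these pods cannot sync while the network is down, so the projected token volume cannot resolve kube-root-ca.crt or openshift-service-ca.crt. A small sketch of that gate, using an assumed cache type in place of the kubelet's manager interface:

package main

import "fmt"

// objectCache stands in for the kubelet's watch-based configmap/secret
// manager: GetObject only succeeds for namespace/name pairs that a
// syncing pod has registered.
type objectCache struct {
	registered map[string]bool
}

func (c *objectCache) GetObject(namespace, name string) error {
	if !c.registered[namespace+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", namespace, name)
	}
	return nil
}

func main() {
	cache := &objectCache{registered: map[string]bool{}}
	var errs []error
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if err := cache.GetObject("openshift-network-diagnostics", name); err != nil {
			errs = append(errs, err)
		}
	}
	// Both sources fail, so MountVolume.SetUp reports the aggregated
	// error and the mount is retried after the 32 s backoff.
	fmt.Println("Error preparing data for projected volume:", errs)
}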
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749381 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749428 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749449 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:11 crc kubenswrapper[5024]: E1128 16:59:11.749542 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:43.749514036 +0000 UTC m=+85.798434971 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.817745 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.817805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.817818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.817838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.817850 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.920869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.921494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.921725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.922096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[5024]: I1128 16:59:11.922328 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.025845 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.025895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.025908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.025928 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.025942 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.128561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.128628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.128640 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.128660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.128677 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.231607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.231675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.231687 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.231712 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.231727 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.334985 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.335076 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.335091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.335112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.335123 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.438175 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.438286 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.438304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.438334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.438352 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.498156 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:12 crc kubenswrapper[5024]: E1128 16:59:12.498373 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.541337 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.541379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.541389 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.541404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.541414 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.644559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.644657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.644671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.644695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.644713 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.748408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.748470 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.748483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.748504 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.748518 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.852263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.852336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.852348 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.852366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.852378 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.956743 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.956807 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.956819 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.956838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[5024]: I1128 16:59:12.956850 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.060864 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.060920 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.060934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.060964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.060981 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.164234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.164303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.164318 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.164343 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.164358 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.268451 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.268527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.268545 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.268570 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.268591 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.371628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.371669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.371685 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.371706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.371717 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.475515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.475615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.475640 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.475692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.475717 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.497067 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.497174 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.497089 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:13 crc kubenswrapper[5024]: E1128 16:59:13.497302 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:13 crc kubenswrapper[5024]: E1128 16:59:13.497446 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:13 crc kubenswrapper[5024]: E1128 16:59:13.497518 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.578980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.579051 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.579064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.579084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.579097 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.682153 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.682213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.682223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.682243 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.682255 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.785194 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.785234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.785243 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.785260 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.785271 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.888363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.888413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.888423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.888445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.888472 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.991286 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.991339 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.991354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.991372 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[5024]: I1128 16:59:13.991384 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.094571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.095186 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.095201 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.095216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.095226 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.198621 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.198673 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.198717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.198733 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.198746 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.301767 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.301821 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.301834 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.301860 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.301874 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.405590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.405632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.405642 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.405657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.405666 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.497830 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:14 crc kubenswrapper[5024]: E1128 16:59:14.498096 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.507695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.507749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.507759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.507781 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.507792 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.610493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.610555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.610567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.610593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.610625 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.713521 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.713562 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.713574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.713594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.713607 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.817693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.817748 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.817758 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.817776 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.817786 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.920454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.920527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.920538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.920558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[5024]: I1128 16:59:14.920569 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.023228 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.023278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.023288 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.023306 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.023317 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.126085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.126183 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.126203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.126234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.126252 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.229292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.229352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.229370 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.229394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.229411 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.336299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.336341 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.336350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.336364 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.336374 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.439277 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.439333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.439345 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.439366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.439381 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.497203 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.497253 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.497328 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:15 crc kubenswrapper[5024]: E1128 16:59:15.497375 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:15 crc kubenswrapper[5024]: E1128 16:59:15.497514 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:15 crc kubenswrapper[5024]: E1128 16:59:15.497632 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.542655 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.542739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.542757 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.542779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.542799 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.645638 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.645683 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.645694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.645711 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.645723 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.748536 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.748577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.748589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.748607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.748619 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.852890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.853294 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.853326 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.853363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.853389 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.956661 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.956724 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.956736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.956760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[5024]: I1128 16:59:15.956774 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.060116 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.060197 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.060223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.060256 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.060285 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.164258 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.164329 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.164350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.164384 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.164403 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.268921 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.269001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.269058 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.269094 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.269117 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.372541 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.372642 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.372662 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.372691 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.372714 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.477132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.477214 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.477232 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.477264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.477281 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.497533 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:16 crc kubenswrapper[5024]: E1128 16:59:16.498287 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.498860 5024 scope.go:117] "RemoveContainer" containerID="4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.580842 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.580915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.580932 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.580957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.580976 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.685977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.686048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.686060 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.686082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.686093 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.788399 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.788451 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.788461 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.788477 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.788533 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.890719 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.890771 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.890784 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.890802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.890841 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.993546 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.993583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.993592 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.993607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[5024]: I1128 16:59:16.993616 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.096679 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.096737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.096753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.096775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.096791 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.112303 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/1.log" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.116085 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.116694 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.133640 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.150322 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.172423 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea60
8b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.193205 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.199082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.199159 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.199178 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.199198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.199215 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.209579 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.224307 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.243630 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.260282 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.276002 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.300480 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.302462 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.302509 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.302519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.302537 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.302548 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.322150 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.335369 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.355834 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.372357 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.397656 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.405238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.405297 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.405309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.405330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.405341 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.411745 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.425919 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:17Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.498075 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.498192 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:17 crc kubenswrapper[5024]: E1128 16:59:17.498301 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.498227 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:17 crc kubenswrapper[5024]: E1128 16:59:17.498509 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:17 crc kubenswrapper[5024]: E1128 16:59:17.498561 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.508886 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.508966 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.508982 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.509006 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.509057 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
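[Editor's note] The entries above show why the node is stuck NotReady: the kubelet gates node readiness on the runtime network, and the runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (on this cluster that file is written by ovn-kubernetes, whose ovnkube-controller container is crash-looping further down in the log). Below is a minimal sketch of that presence check, for illustration only: the directory path comes straight from the log, while the accepted file extensions are an assumption based on common CNI conventions, not CRI-O's actual code.

```python
# Illustration only (not CRI-O's actual implementation): approximate the
# readiness check behind "no CNI configuration file in
# /etc/kubernetes/cni/net.d/". The runtime keeps NetworkReady=false until
# the network plugin drops a config file here; the extension list is an
# assumption based on common CNI conventions.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")   # directory named in the log
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}   # assumed, per CNI convention


def cni_config_files(conf_dir: Path = CNI_CONF_DIR) -> list[Path]:
    """Return candidate CNI config files; empty while the network plugin
    (here: ovn-kubernetes) has not yet written its configuration."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir() if p.suffix in CNI_EXTENSIONS)


if __name__ == "__main__":
    files = cni_config_files()
    if files:
        print("CNI config present:", ", ".join(str(f) for f in files))
    else:
        # This is the state the kubelet keeps reporting above:
        # reason=KubeletNotReady, NetworkReady=false.
        print(f"no CNI configuration file in {CNI_CONF_DIR}/")
```

The expectation, under that reading, is that once ovnkube-node recovers and writes its config the same sync loop flips the node's Ready condition back to True; the repeated "Node became not ready" blocks below are the kubelet re-evaluating and re-reporting the unchanged condition in the meantime.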
Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.613156 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.613279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.613346 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.613384 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.613442 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.716829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.716892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.716903 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.716936 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.716948 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.820597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.820683 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.820711 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.820741 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.820760 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
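[Editor's note] The other failure that repeats throughout this section, above and below, is the status_manager patch error: every pod-status update from the kubelet is rejected because the apiserver must call the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-28 — a typical symptom of resuming a CRC VM long after its certificates were minted. The check that fails can be reproduced with the short sketch below; it is an editor-added illustration, not cluster tooling, and assumes Python plus the third-party `cryptography` package on the node, with host and port taken from the log's webhook URL.

```python
# Minimal sketch reproducing the failing x509 validity check; not the
# cluster's own code. Assumes the third-party `cryptography` package
# (>= 42 for the *_utc properties). Host/port are from the log's
# webhook URL https://127.0.0.1:9743.
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743

# Fetch the serving certificate WITHOUT verifying it -- verification is
# exactly the step that fails in the log, so a verifying TLS client
# would never get this far.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print(f"validity window: {cert.not_valid_before_utc} .. {cert.not_valid_after_utc}")
if now > cert.not_valid_after_utc:
    # Mirrors the kubelet error: "x509: certificate has expired or is
    # not yet valid: current time ... is after 2025-08-24T17:21:41Z"
    print(f"expired: current time {now.isoformat()} is after "
          f"{cert.not_valid_after_utc.isoformat()}")
elif now < cert.not_valid_before_utc:
    print("certificate is not yet valid")
else:
    print("certificate is within its validity window")
```

Because every kubelet-to-apiserver status patch on this node passes through the same webhook, the identical x509 error repeats for unrelated pods (kube-apiserver-crc, node-resolver-7lvcw, node-ca-rcqbr, and others); nothing in those pods' own status is at fault.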
Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.924728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.924867 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.924890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.924917 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[5024]: I1128 16:59:17.924967 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.027930 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.027983 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.027996 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.028036 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.028056 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.123364 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/2.log" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.124524 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/1.log" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.137951 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1" exitCode=1 Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.138076 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.138123 5024 scope.go:117] "RemoveContainer" containerID="4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.139216 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.139246 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.139522 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.139577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.139594 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.140568 5024 scope.go:117] "RemoveContainer" containerID="98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1" Nov 28 16:59:18 crc kubenswrapper[5024]: E1128 16:59:18.141227 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.159776 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b
881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.178453 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.194640 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.210158 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.228417 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.243887 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.243954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.243976 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.244004 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.244047 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.244326 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.260217 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.275135 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.290754 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.312168 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.331051 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-c
opy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.347407 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.348171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.348198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.348224 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.348242 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.358977 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9
f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.382289 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.399899 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.414891 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.430690 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.444427 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.451346 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.451408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.451431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.451454 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.451468 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.498048 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:18 crc kubenswrapper[5024]: E1128 16:59:18.498683 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.517120 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.529695 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.542068 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.554975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.555057 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.555072 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.555092 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.555106 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.558828 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.572146 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.588699 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.609393 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.632559 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9
f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b61713d61dbe72809cefd4337a31fbfc821bae9609c4be6e15a66d3bc389a7d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"message\\\":\\\"ault : 1.853843ms\\\\nI1128 16:58:57.285639 6472 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI1128 16:58:57.285686 6472 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 16:58:57.285700 6472 services_controller.go:360] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator for network=default : 1.801992ms\\\\nF1128 16:58:57.285702 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.654230 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.659492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.659533 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.659545 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.659560 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.659571 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.672139 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.690794 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.706930 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.718568 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.741219 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.763448 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.763716 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.763742 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.763755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.763774 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.763789 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.777930 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.790720 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:18Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.867070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.867396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.867481 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.867643 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.867743 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.971008 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.971071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.971083 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.971102 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[5024]: I1128 16:59:18.971115 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.074195 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.074648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.074908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.075278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.076058 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.143210 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/2.log" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.147141 5024 scope.go:117] "RemoveContainer" containerID="98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1" Nov 28 16:59:19 crc kubenswrapper[5024]: E1128 16:59:19.147433 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.164001 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.177240 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.184248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.184535 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.184550 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.184573 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.184585 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.194528 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.208321 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.221799 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.238209 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.259508 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31a
af5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.282398 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9
f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.287535 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.287567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.287578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.287593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.287603 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.297716 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.312942 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.329586 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.350568 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.366001 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.381111 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.390512 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.390565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.390577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.390598 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.390610 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.398491 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.412772 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.428813 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.493959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.494007 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.494060 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.494081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.494092 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.497631 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.497640 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:19 crc kubenswrapper[5024]: E1128 16:59:19.497783 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.497898 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:19 crc kubenswrapper[5024]: E1128 16:59:19.497998 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:19 crc kubenswrapper[5024]: E1128 16:59:19.498113 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.596267 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.596321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.596336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.596355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.596367 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.698779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.698843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.698873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.698902 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.698918 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.802218 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.802282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.802302 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.802325 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.802343 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.905751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.905798 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.905808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.905827 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[5024]: I1128 16:59:19.905836 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.008478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.008523 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.008534 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.008551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.008562 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.111371 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.111436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.111450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.111471 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.111485 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.214849 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.214910 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.214923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.214944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.214961 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.318355 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.318422 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.318438 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.318463 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.318482 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.421612 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.421687 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.421701 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.421722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.421737 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.497923 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:20 crc kubenswrapper[5024]: E1128 16:59:20.498173 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.524806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.524862 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.524872 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.524890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.524901 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.627528 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.627563 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.627572 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.627589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.627600 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.730439 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.730488 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.730498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.730513 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.730523 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.834160 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.834222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.834237 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.834255 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.834271 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.936839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.936905 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.936916 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.936941 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[5024]: I1128 16:59:20.936955 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.040241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.040290 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.040301 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.040318 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.040329 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.119624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.119684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.119698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.119724 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.119743 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.137333 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.143162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.143205 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.143222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.143245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.143259 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.160764 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.165635 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.165692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.165707 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.165728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.165745 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.181946 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.188634 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.188693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.188706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.188726 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.188740 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.206917 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.211975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.212046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.212056 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.212073 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.212085 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.232652 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.232779 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.234829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
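Every retry above dies on the same TLS handshake: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-28T16:59:21Z, so the kubelet can never deliver its node-status patch and gives up after the retry budget. A minimal sketch for confirming the expiry from the node itself, assuming Python 3 plus the third-party cryptography package (version 42 or later for the *_utc accessors) and that the webhook is still listening on the address quoted in the log:

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    # Address taken from the failing call in the log:
    # Post "https://127.0.0.1:9743/node?timeout=10s"
    HOST, PORT = "127.0.0.1", 9743

    # get_server_certificate() fetches the peer certificate without
    # verifying it, so it works even though the chain has expired.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    if cert.not_valid_after_utc < datetime.now(timezone.utc):
        print("certificate EXPIRED; matches the x509 error in the log")

Run against this node, it should report a notAfter of 2025-08-24 17:21:41 UTC, the exact cutoff quoted in every failed patch attempt.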
event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.234883 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.234897 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.234916 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.234929 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.338871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.338932 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.338943 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.338963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.338976 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.442474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.442552 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.442573 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.442604 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.442621 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.497752 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.497767 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.498100 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.497797 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.498300 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:21 crc kubenswrapper[5024]: E1128 16:59:21.498467 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545993 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545925 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545973 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[5024]: I1128 16:59:21.545993 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical record groups repeat at 16:59:21.648970, 16:59:21.752231, 16:59:21.855466, 16:59:21.958329, 16:59:22.062577, 16:59:22.165693, 16:59:22.268541 and 16:59:22.372081, with the 16:59:22 entries carrying lastHeartbeatTime/lastTransitionTime 2025-11-28T16:59:22Z, elided ...]
Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.475617 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.475671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.475684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.475704 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.475718 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.497909 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:22 crc kubenswrapper[5024]: E1128 16:59:22.498066 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.578074 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.578133 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.578146 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.578167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.578181 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.680845 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.681178 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.681265 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.681358 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.681438 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.786798 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.787196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.787299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.787386 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.787502 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.890180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.890426 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.890435 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.890460 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.890470 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.993795 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.993854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.993867 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.993891 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[5024]: I1128 16:59:22.993904 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.096168 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.096211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.096222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.096238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.096250 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.199257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.199535 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.199562 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.199587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.199605 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.302669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.302744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.302772 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.302805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.302831 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.406151 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.406218 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.406235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.406261 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.406278 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.497517 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.497541 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.497693 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:23 crc kubenswrapper[5024]: E1128 16:59:23.497739 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:23 crc kubenswrapper[5024]: E1128 16:59:23.497918 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:23 crc kubenswrapper[5024]: E1128 16:59:23.498132 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
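pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"

Every "Error syncing pod" and "Node became not ready" entry in this window reduces to the same fact: the runtime reports NetworkPluginNotReady because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A minimal standalone Go sketch of that readiness probe follows; the directory path is taken from the log itself, while the accepted extensions (.conf, .conflist, .json) follow the usual CNI config-loader convention and are an assumption, not something this log states.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet messages above.
        confDir := "/etc/kubernetes/cni/net.d"

        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", confDir, err)
            os.Exit(1)
        }

        var found []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            // Assumed extensions, mirroring common CNI config loaders.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found = append(found, filepath.Join(confDir, e.Name()))
            }
        }

        if len(found) == 0 {
            fmt.Println("no CNI configuration file found: network plugin not ready")
            return
        }
        for _, f := range found {
            fmt.Println("found CNI config:", f)
        }
    }

Once the network operator writes its config file into that directory, a check of this shape passes and the NodeNotReady heartbeats above should stop.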
Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.509703 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.509747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.509761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.509779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.509796 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.643244 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.643646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.643736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.643839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.643919 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.746664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.746720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.746731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.746746 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.746755 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.849656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.850107 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.850266 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.850380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.850468 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.953423 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.953487 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.953504 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.953532 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[5024]: I1128 16:59:23.953550 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.056647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.057180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.057585 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.057740 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.057863 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.161528 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.162249 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.162460 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.162635 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.162809 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.266875 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.266956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.266980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.267058 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.267087 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.370492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.370567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.370581 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.370602 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.370615 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.472957 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.473001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.473010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.473043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.473053 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.497248 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:24 crc kubenswrapper[5024]: E1128 16:59:24.497440 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
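pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1"

The setters.go:603 entries that dominate this capture embed the full Ready condition as inline JSON, which is easier to inspect mechanically than by eye. A small Go sketch that decodes one such payload; the struct fields simply mirror the keys visible in the log lines, and the sample string is copied verbatim from an entry above.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors the keys in the condition={...} JSON above.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // One condition payload copied from the log.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }

Running this prints the condition type, status, reason, and the full NetworkPluginNotReady message on one line, which makes a capture like this one much quicker to grep.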
Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.575627 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.575684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.575695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.575714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.575725 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.678830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.678909 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.678928 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.678964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.678990 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.782775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.782874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.782894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.782953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.782975 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.886544 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.886596 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.886613 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.886635 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.886652 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.989858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.989929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.989952 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.989981 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[5024]: I1128 16:59:24.990003 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.093032 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.093082 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.093094 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.093112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.093123 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.207328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.207382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.207404 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.207432 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.207452 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.310209 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.310292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.310316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.310350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.310373 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.414383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.414448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.414465 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.414488 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.414504 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.496975 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.497053 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.497089 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:25 crc kubenswrapper[5024]: E1128 16:59:25.497271 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:25 crc kubenswrapper[5024]: E1128 16:59:25.497431 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:25 crc kubenswrapper[5024]: E1128 16:59:25.497593 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.517483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.517542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.517562 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.517593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.517613 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.620577 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.620647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.620660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.620684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.620698 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.722788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.722820 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.722830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.722847 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.722858 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.825874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.825956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.825978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.826003 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.826056 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.928735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.928784 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.928792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.928806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[5024]: I1128 16:59:25.928815 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.031986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.032039 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.032049 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.032067 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.032078 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.135526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.135600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.135627 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.135655 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.135673 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.239338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.239397 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.239413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.239436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.239455 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.342158 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.342239 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.342257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.342286 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.342326 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.445962 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.446090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.446134 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.446180 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.446208 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.497745 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:26 crc kubenswrapper[5024]: E1128 16:59:26.497991 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.549433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.549489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.549501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.549519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.549533 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.652274 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.652328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.652342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.652361 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.652375 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.755653 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.755702 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.755711 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.755728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.755741 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.858293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.858350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.858366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.858393 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.858413 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.961538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.961599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.961610 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.961628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.961639 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[5024]: E1128 16:59:26.980117 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:26 crc kubenswrapper[5024]: E1128 16:59:26.980225 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:58.980201801 +0000 UTC m=+101.029122706 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:26 crc kubenswrapper[5024]: I1128 16:59:26.979932 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.064954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.065035 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.065047 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.065066 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.065077 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.167672 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.167727 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.167737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.167755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.167766 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.270161 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.270221 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.270234 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.270254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.270266 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.374095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.374175 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.374200 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.374229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.374250 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.477501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.477562 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.477575 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.477597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.477611 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.497455 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.497455 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.497575 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:27 crc kubenswrapper[5024]: E1128 16:59:27.497609 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:27 crc kubenswrapper[5024]: E1128 16:59:27.497655 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:27 crc kubenswrapper[5024]: E1128 16:59:27.497727 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.581499 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.581548 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.581559 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.581580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.581592 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.685111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.685162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.685175 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.685194 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.685207 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.789897 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.789938 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.789948 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.789964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.789975 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.892594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.892639 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.892648 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.892662 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.892673 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.995095 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.995135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.995145 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.995165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[5024]: I1128 16:59:27.995180 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.097997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.098059 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.098070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.098085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.098093 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.200524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.200565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.200576 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.200594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.200630 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.303755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.303830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.303845 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.303863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.303875 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.406983 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.407050 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.407062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.407085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.407098 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.497150 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:28 crc kubenswrapper[5024]: E1128 16:59:28.497300 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.509679 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.509733 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.509753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.509776 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.509793 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.516285 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.530690 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.544347 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.560947 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.579148 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.596997 5024 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.612470 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.612524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.612537 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.612558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.612573 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.616912 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11
-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bd
bc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.642949 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9
f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.661494 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.681141 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.698198 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.713589 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.714839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.714882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.714894 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.714913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.714923 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.727496 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.743176 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.759598 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.772395 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.787707 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:28Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.819303 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.819650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.819714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.819783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.819853 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.923078 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.923120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.923130 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.923153 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[5024]: I1128 16:59:28.923164 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.025693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.025747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.025757 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.025777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.025788 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.129342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.130160 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.130236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.130311 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.130380 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.233571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.233634 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.233651 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.233676 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.233693 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.336737 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.337326 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.337448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.337569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.337712 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.440861 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.440909 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.440919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.440937 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.440947 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.497447 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.497602 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:29 crc kubenswrapper[5024]: E1128 16:59:29.497616 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:29 crc kubenswrapper[5024]: E1128 16:59:29.497851 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.498212 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:29 crc kubenswrapper[5024]: E1128 16:59:29.498545 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.543983 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.544083 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.544093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.544109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.544120 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.651530 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.652068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.652278 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.652475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.652798 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.756074 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.756126 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.756138 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.756157 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.756171 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.859467 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.859526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.859537 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.859555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.859570 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.962118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.962167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.962179 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.962198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[5024]: I1128 16:59:29.962211 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.065496 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.065573 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.065583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.065600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.065612 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.168815 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.170229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.170265 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.170295 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.170312 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.273469 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.273530 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.273543 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.273566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.273580 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.377478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.377534 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.377549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.377569 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.377580 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.480426 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.480474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.480482 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.480498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.480510 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.497853 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:30 crc kubenswrapper[5024]: E1128 16:59:30.498070 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.584135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.584188 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.584204 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.584237 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.584252 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.687248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.687311 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.687330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.687351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.687364 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.791497 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.791555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.791567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.791589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.791602 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.894727 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.894776 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.894790 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.894808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:30 crc kubenswrapper[5024]: I1128 16:59:30.894820 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.021104 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.021187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.021215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.021252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.021277 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.124793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.124842 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.124853 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.124871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.124883 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.189076 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/0.log" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.189145 5024 generic.go:334] "Generic (PLEG): container finished" podID="97cac632-c692-414d-b0cf-605f0bb7719b" containerID="a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216" exitCode=1 Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.189191 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerDied","Data":"a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.189691 5024 scope.go:117] "RemoveContainer" containerID="a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.213977 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.226989 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.227068 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.227081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.227101 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.227114 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.232065 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.246843 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.261048 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.277040 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.278784 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.278877 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc 
kubenswrapper[5024]: I1128 16:59:31.278893 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.278910 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.278923 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.297591 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.302734 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.303784 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.303846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.303861 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.303882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.303895 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.319525 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.319979 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.324850 5024 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.324903 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.324917 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.324938 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.324955 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.338889 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] 
multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.343779 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.351901 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.351959 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.351972 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.351994 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.352006 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.361996 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.363140 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.367596 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.367650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.367661 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.367684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.367696 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.381108 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.381308 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.382045 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.383844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.383895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.383907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.383931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.383943 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.393556 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.408772 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.420739 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.431530 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.446135 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.462275 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a
39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 
16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.478930 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.497279 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.497317 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.497298 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.497424 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.497988 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:31 crc kubenswrapper[5024]: E1128 16:59:31.497828 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.626265 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.626324 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.626336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.626371 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.626383 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.729087 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.729163 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.729181 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.729219 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.729231 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.832953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.833005 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.833027 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.833044 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.833059 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.936201 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.936236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.936246 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.936290 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[5024]: I1128 16:59:31.936305 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.038902 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.038984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.038997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.039059 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.039076 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.142425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.142473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.142484 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.142502 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.142515 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.196072 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/0.log" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.196201 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerStarted","Data":"fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.215352 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.257091 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.257145 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.257168 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.257202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.257214 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.280731 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.318774 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.334589 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.351861 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.359591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.359638 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.359659 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.359717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.359733 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.366380 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.380743 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.393943 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.414854 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a32658
9e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.428303 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.442201 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.458962 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.462913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.462953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc 
kubenswrapper[5024]: I1128 16:59:32.462966 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.462984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.462996 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.475081 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.487777 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.497821 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:32 crc kubenswrapper[5024]: E1128 16:59:32.497992 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.510369 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.529430 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.547491 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:32Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.565868 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.565943 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.565956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.565984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.566000 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.668598 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.668645 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.668656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.668680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.668691 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.771139 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.771268 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.771279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.771300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.771312 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.874813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.874874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.874886 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.874904 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.874917 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.978439 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.978536 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.978560 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.978616 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[5024]: I1128 16:59:32.978643 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.082070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.082126 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.082135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.082153 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.082167 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.185322 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.185366 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.185377 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.185396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.185408 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.287747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.287808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.287826 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.287854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.287872 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.392225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.392286 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.392306 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.392331 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.392351 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.496641 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.496707 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.496720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.496739 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.496752 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.496949 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.497061 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.497107 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:33 crc kubenswrapper[5024]: E1128 16:59:33.497205 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:33 crc kubenswrapper[5024]: E1128 16:59:33.497287 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:33 crc kubenswrapper[5024]: E1128 16:59:33.497345 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.498726 5024 scope.go:117] "RemoveContainer" containerID="98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1" Nov 28 16:59:33 crc kubenswrapper[5024]: E1128 16:59:33.499062 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.599953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.600080 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.600108 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.600137 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.600158 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.708379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.708433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.708451 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.708476 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.708495 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.812125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.812189 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.812206 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.812232 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.812253 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.916093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.916169 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.916187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.916229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[5024]: I1128 16:59:33.916252 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.019406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.019480 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.019500 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.019535 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.019559 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.123344 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.123425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.123444 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.123470 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.123490 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.227450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.227535 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.227558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.227587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.227609 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.330347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.330388 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.330400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.330418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.330432 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.433934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.434062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.434075 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.434093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.434104 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.497222 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:34 crc kubenswrapper[5024]: E1128 16:59:34.497444 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.537753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.537810 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.537829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.537855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.537881 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.641161 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.641210 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.641257 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.641280 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.641290 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.744478 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.744542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.744561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.744587 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.744616 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.848561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.848649 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.848663 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.848688 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.848705 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.951911 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.951988 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.951999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.952064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[5024]: I1128 16:59:34.952076 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.055756 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.055822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.055843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.055873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.055902 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.160171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.160231 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.160247 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.160265 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.160276 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.264305 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.264356 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.264365 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.264387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.264399 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.367632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.367686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.367696 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.367723 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.367737 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.471383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.471463 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.471488 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.471551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.471576 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.497272 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.497272 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.497295 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:35 crc kubenswrapper[5024]: E1128 16:59:35.497682 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:35 crc kubenswrapper[5024]: E1128 16:59:35.497784 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:35 crc kubenswrapper[5024]: E1128 16:59:35.497785 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.575041 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.575106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.575121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.575140 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.575153 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.678040 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.678081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.678090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.678105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.678116 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.780535 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.780588 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.780600 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.780623 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.780638 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.883212 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.883253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.883264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.883282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.883296 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.986092 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.986146 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.986157 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.986173 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[5024]: I1128 16:59:35.986207 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.089758 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.089811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.089821 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.089840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.089852 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.192913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.192978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.192996 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.193152 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.193243 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.296719 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.296777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.296794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.296817 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.296831 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.400335 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.400401 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.400415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.400436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.400449 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.498156 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:36 crc kubenswrapper[5024]: E1128 16:59:36.498489 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.503074 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.503120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.503135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.503156 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.503169 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.606236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.606328 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.606363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.606401 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.606428 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.709313 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.709353 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.709362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.709380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.709391 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.812472 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.812524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.812538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.812555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.812568 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.914568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.914608 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.914618 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.914654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[5024]: I1128 16:59:36.914672 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.017874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.017922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.017934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.017951 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.017963 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.120321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.120415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.120432 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.120450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.120461 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.222169 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.222224 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.222240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.222261 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.222277 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.326193 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.326226 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.326235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.326254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.326267 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.429396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.430011 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.430226 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.430302 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.430380 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.497333 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:37 crc kubenswrapper[5024]: E1128 16:59:37.497782 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.498124 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:37 crc kubenswrapper[5024]: E1128 16:59:37.498292 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.498542 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:37 crc kubenswrapper[5024]: E1128 16:59:37.498714 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.533647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.533981 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.534186 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.534287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.534448 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.638650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.639407 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.639555 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.639673 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.639740 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.742885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.742924 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.742934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.742955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.742969 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.846052 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.846135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.846148 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.846172 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.846190 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.950591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.951349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.951436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.951474 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:37 crc kubenswrapper[5024]: I1128 16:59:37.951532 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.054448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.054504 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.054515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.054536 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.054548 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.158448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.158512 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.158526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.158547 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.158567 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.262115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.262247 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.262315 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.262352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.262484 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.366770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.366853 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.366879 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.366916 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.366941 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.470003 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.470093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.470106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.470127 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.470138 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.497495 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc"
Nov 28 16:59:38 crc kubenswrapper[5024]: E1128 16:59:38.497921 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1"
Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.517893 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.518142 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.537997 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.559176 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.572749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.572811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc 
kubenswrapper[5024]: I1128 16:59:38.572832 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.572860 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.572875 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.585706 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9
f3811d506df54b03aaf7a6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.605423 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.621255 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.637636 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.651196 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.665976 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.676271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.676338 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.676349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.676369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.676385 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.683550 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.706106 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931af
c027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 
tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.722951 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.745729 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.759804 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.772259 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.781804 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.781865 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.781877 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.781896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.782238 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.787231 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.802618 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:38Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.885479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.885520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.885532 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.885550 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.885562 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.988170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.988214 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.988223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.988240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:38 crc kubenswrapper[5024]: I1128 16:59:38.988250 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.091179 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.091252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.091276 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.091305 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.091325 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.193993 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.194061 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.194071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.194092 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.194103 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.296815 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.296871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.296883 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.296905 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.296949 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.399590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.399669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.399693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.399725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.399747 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.497842 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.497986 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.498096 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:39 crc kubenswrapper[5024]: E1128 16:59:39.498132 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:39 crc kubenswrapper[5024]: E1128 16:59:39.498299 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:39 crc kubenswrapper[5024]: E1128 16:59:39.498470 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
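
[editor's note] Independently of the webhook problem, the node keeps reporting NotReady because the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/. A minimal, dependency-free way to confirm that condition from the host is sketched below; the accepted file extensions mirror what common CNI config loaders look for, which is an assumption of this note, not something stated in the log.

    // cnicheck.go - sketch: list CNI config files in the directory named
    // in the NetworkPluginNotReady message. An empty or missing directory
    // is exactly the condition the kubelet is complaining about.
    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", dir, err)
        }
        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // typical CNI config extensions
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
                found++
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration file found; node will stay NotReady")
        }
    }
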
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.503928 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.504009 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.504061 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.504096 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.504121 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.607445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.607527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.607544 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.607571 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.607589 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.710562 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.710617 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.710633 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.710654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.710669 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.813934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.813992 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.814010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.814042 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.814057 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.916786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.916842 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.916854 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.916872 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:39 crc kubenswrapper[5024]: I1128 16:59:39.916884 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.019592 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.019639 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.019651 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.019670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.019683 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.122931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.122987 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.123003 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.123037 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.123050 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.226682 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.226769 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.226801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.226850 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.226871 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.331262 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.331330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.331345 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.331368 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.331390 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.434517 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.434568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.434580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.434599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.434612 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.497718 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:40 crc kubenswrapper[5024]: E1128 16:59:40.497973 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
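
[editor's note] The "No sandbox for pod can be found. Need to start a new one" lines followed by "Error syncing pod, skipping" show the kubelet deferring sandbox creation for regular pods until the network plugin is ready, while the host-network static pods (kube-apiserver, kube-scheduler) were already Running earlier in this log because they do not need CNI. The model below is a deliberate simplification written for this note, not the kubelet's actual code path.

    // syncgate.go - simplified model of the gating visible above:
    // non-host-network pods fail to sync while NetworkReady=false.
    package main

    import "fmt"

    type pod struct {
        name        string
        hostNetwork bool
    }

    func syncPod(p pod, networkReady bool) error {
        if !networkReady && !p.hostNetwork {
            return fmt.Errorf("network is not ready: NetworkReady=false")
        }
        return nil // would go on to create the sandbox and start containers
    }

    func main() {
        pods := []pod{
            {"openshift-multus/network-metrics-daemon-5t4kc", false},
            {"openshift-kube-apiserver/kube-apiserver-crc", true},
        }
        for _, p := range pods {
            if err := syncPod(p, false); err != nil {
                fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
            } else {
                fmt.Printf("synced pod=%q\n", p.name)
            }
        }
    }
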
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.542463 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.542538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.542549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.542570 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.542591 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.645686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.645750 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.645807 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.645829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.645843 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.750325 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.750471 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.750492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.750519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.750539 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.854835 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.854949 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.855075 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.855172 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.855208 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.958079 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.958162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.958193 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.958289 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:40 crc kubenswrapper[5024]: I1128 16:59:40.958303 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.060744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.060785 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.060794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.060809 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.060818 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.163584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.163654 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.163668 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.163690 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.163706 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.266669 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.266720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.266731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.266746 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.266758 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
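
[editor's note] The same five node events repeat roughly every 100 ms for as long as the Ready condition stays False, which makes the raw journal hard to scan. A small helper of the kind sketched below (written for this note; not a tool referenced by the log) condenses the stream into per-event counts.

    // eventcount.go - count event="..." occurrences on stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`event="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                counts[m[1]]++
            }
        }
        for ev, n := range counts {
            fmt.Printf("%6d %s\n", n, ev)
        }
    }

For example, piping `journalctl -u kubelet --no-pager` through it would tally how many NodeNotReady and NodeHasSufficientMemory events this flapping produced.
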
Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.369167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.369594 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.369680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.369765 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.369831 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.472890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.472991 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.473010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.473070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.473096 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.497361 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.497356 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.497396 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.497975 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.498182 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.497817 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.576413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.576455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.576468 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.576486 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.576498 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.610670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.610796 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.610808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.610820 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.610828 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.629012 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.634436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.634698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.634906 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.635236 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.635290 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.658424 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.662420 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.662469 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
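Every one of these status-patch retries fails for the same reason: the patch must pass through the node.network-node-identity.openshift.io webhook on https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, well before the node's current clock of 2025-11-28T16:59:41Z. The Go sketch below is not the kubelet's or apiserver's actual code, and the PEM path is a hypothetical example; it only reproduces the NotBefore/NotAfter comparison that crypto/x509 applies during a TLS handshake and prints the same style of message seen in the log:

// cert_window.go: a minimal sketch of the x509 time-validity check
// behind "certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path for illustration; not taken from the log.
	pemBytes, err := os.ReadFile("/tmp/webhook-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The case this node hit: 2025-11-28T16:59:41Z is after the
		// certificate's NotAfter of 2025-08-24T17:21:41Z.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}

No amount of kubelet retries can succeed while the clock sits outside the certificate's validity window; recovery requires renewing or regenerating the expired certificate.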
event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.662482 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.662503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.662521 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.681180 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.686953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.686999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.687008 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.687048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.687063 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.708257 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.713210 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.713268 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.713279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.713304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.713317 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.731660 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[5024]: E1128 16:59:41.731843 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.734632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.734719 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.734731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.734752 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.734769 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.839121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.839167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.839179 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.839196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.839209 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.943641 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.943680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.943689 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.943705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[5024]: I1128 16:59:41.943716 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.046506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.046566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.046579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.046599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.046614 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.148975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.149053 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.149070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.149088 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.149099 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.251788 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.251844 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.251858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.251877 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.251893 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.356948 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.357048 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.357064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.357088 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.357109 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.460480 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.460538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.460558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.460580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.460592 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.497361 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:42 crc kubenswrapper[5024]: E1128 16:59:42.497536 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.563561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.563608 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.563627 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.563649 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.563664 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.666565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.666627 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.666644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.666668 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.666684 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.769568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.769612 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.769620 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.769637 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.769648 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.872524 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.872601 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.872615 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.872636 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.872650 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.975193 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.975250 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.975264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.975285 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[5024]: I1128 16:59:42.975299 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.077836 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.077911 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.077923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.077945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.077957 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.181631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.181692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.181708 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.181731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.181744 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.285253 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.285315 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.285327 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.285349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.285362 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.388972 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.389071 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.389086 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.389109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.389128 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.492975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.493065 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.493079 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.493103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.493117 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.497246 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.497257 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.497366 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.497490 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.497530 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.497787 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.584742 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.585173 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:47.585123055 +0000 UTC m=+149.634044000 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.596249 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.596322 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.596347 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.596379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.596398 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.685505 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.685567 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.685713 5024 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.685761 5024 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.685819 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:47.685795484 +0000 UTC m=+149.734716399 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.685848 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:47.685835245 +0000 UTC m=+149.734756170 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.699867 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.699932 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.699945 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.699965 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.699977 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.787206 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.787323 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787554 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787581 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787596 5024 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787624 5024 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787699 5024 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787715 5024 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787673 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:47.787655018 +0000 UTC m=+149.836575923 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:43 crc kubenswrapper[5024]: E1128 16:59:43.787889 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:47.787840113 +0000 UTC m=+149.836761038 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.803119 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.803183 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.803196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.803217 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.803228 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.905999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.906077 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.906093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.906113 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[5024]: I1128 16:59:43.906130 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.009130 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.009185 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.009202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.009227 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.009246 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.112694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.112741 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.112751 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.112772 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.112787 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.215882 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.215934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.215944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.215963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.215974 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.318827 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.318873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.318885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.318905 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.318916 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.422287 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.422350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.422362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.422382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.422395 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.497849 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:44 crc kubenswrapper[5024]: E1128 16:59:44.498052 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.499044 5024 scope.go:117] "RemoveContainer" containerID="98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.526075 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.526131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.526141 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.526160 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.526172 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.629441 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.629843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.629857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.629879 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.629890 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.733811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.733880 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.733891 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.733913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.733926 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.837687 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.837888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.837923 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.837956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.837979 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.941415 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.941473 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.941483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.941503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[5024]: I1128 16:59:44.941517 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.044770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.044820 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.044838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.044856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.044872 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.148379 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.148443 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.148461 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.148487 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.148504 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.252382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.252439 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.252455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.252475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.252489 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.497206 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.497261 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:45 crc kubenswrapper[5024]: I1128 16:59:45.497314 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
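These sandbox misses are the same failure seen from the pod side: new sandboxes cannot be created until a CNI network exists, and the runtime keeps reporting that /etc/kubernetes/cni/net.d/ contains no configuration. A rough stand-in for that check, for illustration only (not CRI-O's or the kubelet's actual code), is a scan of the directory for CNI config files:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // the directory named in the log messages above
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	var confs []string
    	for _, e := range entries {
    		// CNI accepts .conf, .conflist and .json configuration files.
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			confs = append(confs, e.Name())
    		}
    	}
    	if len(confs) == 0 {
    		fmt.Println("no CNI configuration file found - node will stay NotReady")
    		return
    	}
    	fmt.Println("CNI configs:", strings.Join(confs, ", "))
    }

In this log the directory stays empty until ovn-kubernetes writes 10-ovn-kubernetes.conf, the same readiness-indicator file the multus entry further down is still waiting for.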
Nov 28 16:59:45 crc kubenswrapper[5024]: E1128 16:59:45.497403 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:45 crc kubenswrapper[5024]: E1128 16:59:45.497660 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 16:59:45 crc kubenswrapper[5024]: E1128 16:59:45.497920 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
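All three unready pods then fail their sync with the identical network-not-ready error, and the kubelet retries them until the CNI config appears. When triaging longer runs of these entries it helps to collapse them to one row per pod; a small sketch, with the regexp written against the exact klog fields above (the sample lines truncate the err field for brevity):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Sample lines in the same shape as the pod_workers.go:1301 entries above.
    	lines := []string{
    		`E1128 16:59:45.497403 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"`,
    		`E1128 16:59:45.497660 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"`,
    		`E1128 16:59:45.497920 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"`,
    	}
    	re := regexp.MustCompile(`pod="([^"]+)" podUID="([^"]+)"`)
    	seen := map[string]int{} // "namespace/pod uid" -> occurrence count
    	for _, l := range lines {
    		if m := re.FindStringSubmatch(l); m != nil {
    			seen[m[1]+" "+m[2]]++
    		}
    	}
    	for k, n := range seen {
    		fmt.Printf("%3d  %s\n", n, k)
    	}
    }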
Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.275247 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/2.log"
Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.277838 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd"}
Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.278462 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
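The three entries above show the ovnkube-node pod's controller container starting and its readiness probe firing, which is the path out of the CNI deadlock. The status_manager failures that follow are a separate fault: every pod status patch is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-28. The rejection is ordinary x509 validity-window checking; a self-contained sketch of that check (webhook-cert.pem is a hypothetical input path, not a file named in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	pemBytes, err := os.ReadFile("webhook-cert.pem") // hypothetical path, for illustration
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse:", err)
    		return
    	}
    	now := time.Now().UTC()
    	switch {
    	case now.Before(cert.NotBefore):
    		fmt.Println("certificate not yet valid")
    	case now.After(cert.NotAfter):
    		// Same shape as the error in the log: "current time ... is after ...".
    		fmt.Printf("certificate has expired: current time %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    	default:
    		fmt.Println("certificate within its validity window")
    	}
    }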
Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.297587 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.314296 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.333676 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.350209 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.370706 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.392466 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.398935 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.398971 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.398980 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.398996 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.399006 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.414623 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.440576 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.456318 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.484849 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adb7f39-adfc-4b19-ade8-cb5e4cabab18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2752c5873cb62269bfe3ede5bf8d88d306ced5c6e198a0b96c3f8d3748c0f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f805b89004d6feac3504587239ede0386e63f5776fbecaf2ae4e397a2e9b7b4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126f470b7087ee944c80851edeee88ae97a89b1fa710a522d6ff2cb4710f983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57347508de49dbce7e1fb1f625993ba3c967682
0588c2cbe4ebbc54d0e7a46db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66028a7f2194d675fd52778ac8ffa00b749e3e2272df93fa1ae4500705d2a409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.497733 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:46 crc kubenswrapper[5024]: E1128 16:59:46.497941 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.501396 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.501448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.501500 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.501532 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.501545 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.504662 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.530584 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.547937 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.576391 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.595228 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58
:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.604515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.604596 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.604614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.604642 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.604660 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.616663 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.635557 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.648842 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.707720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.707779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.707791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.707811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.707827 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.810785 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.810846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.810873 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.810900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.810916 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.914445 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.914529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.914550 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.914649 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[5024]: I1128 16:59:46.914669 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.018701 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.018767 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.018785 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.018811 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.018826 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.122566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.122626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.122639 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.122660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.122673 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.226759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.226824 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.226837 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.226858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.226874 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.284246 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/3.log" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.285390 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/2.log" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.289879 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" exitCode=1 Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.289950 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.290037 5024 scope.go:117] "RemoveContainer" containerID="98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.291087 5024 scope.go:117] "RemoveContainer" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" Nov 28 16:59:47 crc kubenswrapper[5024]: E1128 16:59:47.291409 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.319249 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.330805 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.330876 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.330885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.330906 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.330918 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.341250 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.359794 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.378734 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.401737 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.432563 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98c437fe3f2091a5ea304da5f103662bc04a41c9f3811d506df54b03aaf7a6d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:17Z\\\",\\\"message\\\":\\\"5415 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g after 0 failed attempt(s)\\\\nI1128 16:59:17.555421 6669 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1128 16:59:17.555430 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI1128 16:59:17.555441 6669 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1128 16:59:17.555452 6669 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1128 16:59:17.555136 6669 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1128 16:59:17.555455 6669 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"message\\\":\\\"ateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 16:59:46.792538 7044 services_controller.go:452] Built service 
openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792550 7044 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792558 7044 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1128 16:59:46.792579 7044 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterL\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.434313 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.434378 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.434393 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.434418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.434443 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.449006 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.465640 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.482673 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.497571 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.497627 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.497665 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:47 crc kubenswrapper[5024]: E1128 16:59:47.497760 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.497845 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: E1128 16:59:47.498121 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:47 crc kubenswrapper[5024]: E1128 16:59:47.498186 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.514720 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.515243 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 
2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.537721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.537760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.537773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.537792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.537806 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.550236 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adb7f39-adfc-4b19-ade8-cb5e4cabab18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2752c5873cb62269bfe3ede5bf8d88d306ced5c6e198a0b96c3f8d3748c0f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f805b89004d6feac3504587239ede0386e63f5776fbecaf2ae4e397a2e9b7b4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126f470b7087ee944c80851edeee88ae97a89b1fa710a522d6ff2cb4710f983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57347508de49dbce7e1fb1f625993ba3c9676820588c2cbe4ebbc54d0e7a46db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66028a7f2194d675fd52778ac8ffa00b749e3e2272df93fa1ae4500705d2a409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.570189 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.586434 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.599947 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.613565 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.629107 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a
39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 
16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-28T16:59:47Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.641284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.641350 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.641362 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.641387 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.641401 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.645609 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:47Z is after 
2025-08-24T17:21:41Z" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.745818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.745892 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.745909 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.745933 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.745954 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.849490 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.849967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.850081 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.850178 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.850244 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.954638 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.954731 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.954762 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.954793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[5024]: I1128 16:59:47.954812 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.063448 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.063520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.063541 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.063565 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.063585 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.166869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.166915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.166929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.166949 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.166962 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.270728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.270813 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.270828 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.270856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.270870 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.297235 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/3.log" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.303068 5024 scope.go:117] "RemoveContainer" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" Nov 28 16:59:48 crc kubenswrapper[5024]: E1128 16:59:48.303255 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.319345 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.345821 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adb7f39-adfc-4b19-ade8-cb5e4cabab18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2752c5873cb62269bfe3ede5bf8d88d306ced5c6e198a0b96c3f8d3748c0f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f805b89004d6feac3504587239ede0386e63f5776fbecaf2ae4e397a2e9b7b4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126f470b7087ee944c80851edeee88ae97a89b1fa710a522d6ff2cb4710f983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57347508de49dbce7e1fb1f625993ba3c967682
0588c2cbe4ebbc54d0e7a46db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66028a7f2194d675fd52778ac8ffa00b749e3e2272df93fa1ae4500705d2a409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.362811 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.373704 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.373742 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.373752 5024 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.373772 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.373788 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.386822 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.405165 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.423977 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.439261 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58
:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.462388 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.476903 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.476982 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.477001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.477046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.477067 5024 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.481049 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.496753 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.497361 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:48 crc kubenswrapper[5024]: E1128 16:59:48.497581 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.521124 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.536824 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.550031 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.565476 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.578623 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210df6e3-539a-4a22-b118-7d0cd5f01bba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155da67f291f7b2b01e88f859d0c5e8dad924363c72e0cbba9dbaec899a6f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6191c20f22a7164a4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6191c20f22a7164a4f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.580156 5024 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.580191 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.580203 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.580222 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.580235 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.595782 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a
1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.616118 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.632155 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.654148 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"message\\\":\\\"ateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 16:59:46.792538 7044 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792550 7044 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792558 7044 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1128 16:59:46.792579 7044 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterL\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.669112 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.683250 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.683305 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.683316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.683334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.683348 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.688250 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.707810 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.730555 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.746875 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.764536 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.781452 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.787093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.787161 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.787182 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.787201 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.787213 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.797334 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.814114 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.832039 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.855748 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"message\\\":\\\"ateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 16:59:46.792538 7044 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792550 7044 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792558 7044 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1128 16:59:46.792579 7044 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterL\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.875667 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210df6e3-539a-4a22-b118-7d0cd5f01bba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155da67f291f7b2b01e88f859d0c5e8dad924363c72e0cbba9dbaec899a6f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6
191c20f22a7164a4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6191c20f22a7164a4f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.890520 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.890589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.890603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.890624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.890638 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.892664 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.909718 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.930628 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.954136 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.974316 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.994806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.994874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.994888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.994909 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[5024]: I1128 16:59:48.994923 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.002363 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adb7f39-adfc-4b19-ade8-cb5e4cabab18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2752c5873cb62269bfe3ede5bf8d88d306ced5c6e198a0b96c3f8d3748c0f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f805b89004d6feac3504587239ede0386e63f5776fbecaf2ae4e397a2e9b7b4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126f470b7087ee944c80851edeee88ae97a89b1fa710a522d6ff2cb4710f983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57347508de49dbce7e1fb1f625993ba3c9676820588c2cbe4ebbc54d0e7a46db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66028a7f2194d675fd52778ac8ffa00b749e3e2272df93fa1ae4500705d2a409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:48Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.023227 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.098619 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.098682 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.098697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.098721 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.098736 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.202173 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.202247 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.202273 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.202294 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.202305 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.305492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.305538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.305549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.305567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.305578 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.408754 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.408808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.408819 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.408835 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.408846 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.498002 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.498002 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:49 crc kubenswrapper[5024]: E1128 16:59:49.498214 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.498298 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:49 crc kubenswrapper[5024]: E1128 16:59:49.498352 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:49 crc kubenswrapper[5024]: E1128 16:59:49.498586 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.513099 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.513184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.513198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.513221 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.513237 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.615974 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.616103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.616434 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.616710 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.616981 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.719394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.719441 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.719452 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.719470 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.719481 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.823282 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.823354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.823398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.823433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.823457 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.927207 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.927272 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.927285 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.927310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[5024]: I1128 16:59:49.927325 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.031683 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.031774 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.031801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.031843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.031867 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.134646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.134694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.134706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.134741 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.134756 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.237794 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.237866 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.237888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.237913 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.238073 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.341488 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.341580 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.341604 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.341641 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.341665 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.444771 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.444859 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.444885 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.445064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.445113 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.497348 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:50 crc kubenswrapper[5024]: E1128 16:59:50.497544 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.553496 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.553556 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.553759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.553778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.553790 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.657997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.658115 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.658149 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.658191 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.658217 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.761351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.761408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.761426 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.761452 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.761470 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.864756 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.864818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.864836 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.864866 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.864883 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.969012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.969153 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.969176 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.969207 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[5024]: I1128 16:59:50.969226 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.073408 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.073466 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.073479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.073500 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.073517 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.177288 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.177340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.177352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.177371 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.177397 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.280573 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.280644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.280697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.280720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.280733 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.384696 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.384761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.384774 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.384800 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.384817 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.487858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.487922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.487941 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.487961 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.487977 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.497305 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.497346 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.497419 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.497431 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.497543 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.497647 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.591047 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.591103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.591119 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.591143 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.591162 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.693705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.693770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.693783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.693804 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.693817 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.797242 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.797302 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.797315 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.797336 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.797347 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.865827 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.865895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.865908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.865930 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.865947 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.885274 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.889834 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.889886 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.889900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.889920 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.889932 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.903771 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.909265 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.909310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.909321 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.909339 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.909352 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.923371 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.928043 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.928099 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.928109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.928129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.928143 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.943242 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.946871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.946917 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.946929 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.946947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.946960 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.960866 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[5024]: E1128 16:59:51.960983 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.962736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
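
Every retry above fails the same way: the kubelet's status patch (the "{...}" payloads, identical to the full patch shown earlier, differing only in timestamps) is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28T16:59:51Z. A minimal diagnostic sketch, assuming Python 3 with the third-party cryptography package on a host that can reach that loopback endpoint (i.e. the CRC node itself), to read the certificate's validity window directly:

    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the Post URL in the error

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False      # we are inspecting the cert, not authenticating
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # DER bytes, returned even with CERT_NONE

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc).replace(tzinfo=None)  # naive UTC, matching cryptography
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)   # the log says 2025-08-24T17:21:41Z
    print("expired:  ", now > cert.not_valid_after)

If the printed notAfter matches the date in the error, this is a certificate-rotation problem rather than a networking one, and the NetworkReady=false conditions that follow are plausibly downstream of the same expired internal certificates.
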
event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.962759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.962768 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.962783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[5024]: I1128 16:59:51.962794 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.066773 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.066847 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.066871 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.066903 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.066926 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.169725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.169767 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.169846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.169868 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.169887 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.273291 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.273335 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.273363 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.273382 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.273392 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.376164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.376248 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.376279 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.376308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.376328 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.479438 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.479494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.479508 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.479525 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.479538 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.497096 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:52 crc kubenswrapper[5024]: E1128 16:59:52.497264 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.583224 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.583324 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.583341 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.583640 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.583662 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.688710 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.688789 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.688806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.688831 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.688861 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.791907 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.791968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.791978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.791997 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.792007 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.895340 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.895400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.895417 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.895492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.895511 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.999118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.999160 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.999171 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.999191 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[5024]: I1128 16:59:52.999205 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.102062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.102098 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.102106 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.102121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.102131 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.204606 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.204714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.204725 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.204740 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.204750 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.307908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.307979 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.307994 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.308031 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.308044 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.411250 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.411284 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.411292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.411309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.411320 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.497682 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.497935 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.498001 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:53 crc kubenswrapper[5024]: E1128 16:59:53.498169 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:53 crc kubenswrapper[5024]: E1128 16:59:53.498291 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:53 crc kubenswrapper[5024]: E1128 16:59:53.498363 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
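Both the NotReady heartbeats and the pod sync failures above reduce to one predicate: the kubelet's network readiness check finds no CNI network config on disk yet. Below is a minimal sketch of an equivalent spot check, not part of the log; it has to run on the node itself, and the directory path is taken verbatim from the message.

```python
# Sketch: reproduce the "no CNI configuration file" readiness check by
# listing network configs the way a CNI-consuming runtime would.
import json
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # path from the log message

if not CNI_DIR.is_dir():
    print(f"{CNI_DIR} does not exist - network plugin not started?")
else:
    configs = sorted(
        p for p in CNI_DIR.iterdir() if p.suffix in {".conf", ".conflist", ".json"}
    )
    if not configs:
        print(f"no CNI configuration file in {CNI_DIR}/ - network plugin not started?")
    for p in configs:
        try:
            name = json.loads(p.read_text()).get("name", "<unnamed>")
            print(f"{p.name}: network {name!r}")
        except (OSError, json.JSONDecodeError) as exc:
            print(f"{p.name}: unreadable ({exc})")
```

Once the network plugin (OVN-Kubernetes, going by the node-identity webhook above) writes its conflist into that directory, NetworkReady flips to true and these sync errors stop on their own.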
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.513728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.513786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.513803 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.513839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.513861 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.616628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.616678 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.616688 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.616705 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.616717 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.719697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.719753 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.719762 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.719786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.719799 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.822698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.822749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.822761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.822779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.822793 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.925862 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.925934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.925951 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.925977 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[5024]: I1128 16:59:53.925995 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.028677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.028736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.028749 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.028769 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.028781 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.133857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.133922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.133939 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.133960 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.133977 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.237326 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.237371 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.237381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.237438 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.237449 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.339986 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.340084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.340102 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.340126 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.340144 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.443066 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.443126 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.443135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.443151 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.443162 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.497601 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:54 crc kubenswrapper[5024]: E1128 16:59:54.497833 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1"
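From here to the end of the capture the same five entries repeat roughly every 100 ms: four "Recording event message for node" lines and the setters.go "Node became not ready" condition, differing only in timestamps. Below is a small sketch, not part of the log, for collapsing a capture like this into per-message counts with first and last klog timestamps; the input file name is a placeholder, and the regex assumes the journalctl/klog line shape shown here.

```python
# Sketch: summarize a repetitive kubelet journal capture by counting each
# quoted klog message and remembering when it was first and last seen.
import re
import sys

# klog prefix inside a journald line, e.g.:
#   I1128 16:59:52.066773 5024 kubelet_node_status.go:724] "Recording event message for node" ...
KLOG = re.compile(r'[IWE](\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+\S+\]\s+"([^"]+)"')

counts: dict[str, list] = {}
with open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log") as fh:
    for line in fh:
        m = KLOG.search(line)
        if not m:
            continue
        ts, msg = m.groups()
        entry = counts.setdefault(msg, [0, ts, ts])
        entry[0] += 1   # occurrences
        entry[2] = ts   # last seen

for msg, (n, first, last) in counts.items():
    print(f"{n:6d}x  {first} .. {last}  {msg}")
```

Run against this stretch it would reduce the capture to a handful of distinct messages, with the heartbeat entries dominating the counts and the webhook and pod-sync errors standing out as the low-frequency lines worth reading.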
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.546574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.546660 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.546670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.546688 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.546701 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.649154 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.649205 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.649215 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.649232 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.649241 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.752173 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.752264 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.752289 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.752322 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.752346 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.856164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.856254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.856271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.856297 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.856313 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.958758 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.958818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.958838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.958858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[5024]: I1128 16:59:54.958871 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.062501 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.062554 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.062567 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.062585 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.062596 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.166153 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.166213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.166225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.166242 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.166254 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.269533 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.269599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.269755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.269786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.269805 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.373141 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.373214 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.373230 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.373252 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.373266 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.476908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.476967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.476982 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.477000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.477011 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.497368 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.497368 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:55 crc kubenswrapper[5024]: E1128 16:59:55.497546 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.497404 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:55 crc kubenswrapper[5024]: E1128 16:59:55.497668 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:55 crc kubenswrapper[5024]: E1128 16:59:55.497743 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.580898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.580964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.580978 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.581000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.581015 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.683949 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.684012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.684028 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.684066 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.684080 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.787433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.787493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.787504 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.787527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.787540 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.890168 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.890214 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.890225 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.890240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.890251 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.993799 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.993841 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.993853 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.993869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[5024]: I1128 16:59:55.993881 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.096541 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.096593 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.096606 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.096627 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.096639 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.200310 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.200397 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.200414 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.200440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.200460 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.303314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.303386 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.303406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.303433 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.303455 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.407388 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.407529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.407544 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.407563 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.407574 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.497830 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc"
Nov 28 16:59:56 crc kubenswrapper[5024]: E1128 16:59:56.498162 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.510778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.510843 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.510870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.510900 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.510924 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.614001 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.614053 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.614090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.614110 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.614120 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.716779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.716829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.716839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.716855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.716864 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.819483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.819578 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.819614 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.819647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.819670 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.922561 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.922624 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.922644 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.922670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[5024]: I1128 16:59:56.922688 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.026090 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.026201 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.026224 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.026254 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.026272 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.129534 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.129597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.129610 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.129630 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.129645 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.233205 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.233268 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.233280 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.233298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.233312 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.336380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.336430 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.336439 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.336459 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.336472 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.440010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.440084 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.440098 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.440120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.440135 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.497628 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.497696 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.497643 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:59:57 crc kubenswrapper[5024]: E1128 16:59:57.497855 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:59:57 crc kubenswrapper[5024]: E1128 16:59:57.497989 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:57 crc kubenswrapper[5024]: E1128 16:59:57.498417 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.544093 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.544145 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.544155 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.544175 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.544188 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.648010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.648119 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.648129 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.648148 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.648161 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.751324 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.751380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.751390 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.751407 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.751421 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.853898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.853944 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.853954 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.853972 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.853983 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.958220 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.958275 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.958286 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.958304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:57 crc kubenswrapper[5024]: I1128 16:59:57.958315 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.060775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.060826 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.060839 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.060855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.060864 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.163815 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.163861 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.163879 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.163899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.163916 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.267008 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.267109 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.267125 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.267147 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.267163 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.374515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.374603 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.374625 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.374656 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.374686 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.478761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.478828 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.478846 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.478869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.478890 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
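Editor's note: every KubeletNotReady entry above traces back to one condition: the kubelet finds no CNI network config on disk, so it keeps the node's Ready condition False and re-records the same four node events on each sync loop, roughly every 100 ms. The sketch below mimics that lookup, assuming only what the message itself states (the directory /etc/kubernetes/cni/net.d and the stock libcni config extensions *.conf, *.conflist, *.json); it is an illustration of the failing check, not kubelet source.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // Sketch: look for any CNI network config in the directory named by the
    // log line, using the standard libcni filename extensions. If nothing
    // matches, the kubelet would report NetworkPluginNotReady as seen above.
    func main() {
    	confDir := "/etc/kubernetes/cni/net.d" // taken verbatim from the log
    	var found []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		matches, err := filepath.Glob(filepath.Join(confDir, pat))
    		if err != nil {
    			fmt.Fprintln(os.Stderr, "glob:", err)
    			os.Exit(1)
    		}
    		found = append(found, matches...)
    	}
    	if len(found) == 0 {
    		fmt.Printf("no CNI configuration file in %s -> node stays NotReady\n", confDir)
    		return
    	}
    	fmt.Println("CNI config present:", found)
    }

On this node the check keeps failing because the network operator has not yet written a config, which is why the identical block repeats until the log switches to the webhook errors below.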
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.497929 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:58 crc kubenswrapper[5024]: E1128 16:59:58.498203 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.534247 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5adb7f39-adfc-4b19-ade8-cb5e4cabab18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2752c5873cb62269bfe3ede5bf8d88d306ced5c6e198a0b96c3f8d3748c0f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f805b89004d6feac3504587239ede0386e63f5776fbecaf2ae4e397a2e9b7b4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"n
ame\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b126f470b7087ee944c80851edeee88ae97a89b1fa710a522d6ff2cb4710f983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57347508de49dbce7e1fb1f625993ba3c9676820588c2cbe4ebbc54d0e7a46db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66028a7f2194d675fd52778ac8ffa00b749e3e2272df93fa1ae4500705d2a409\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db541b40512a9d8af0105395534bcce4ebbeb5f1bf45280c0afc64946f033e05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://790c6d25e5e108d1497005cbd1a08df6664d2f05922e99f939e0e31299853016\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0eb0e257310f5b971f5bbd292aab98bdb0afedbeb38ab6edcd5003b51a96dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.553863 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1203765f-4dc5-4d8f-8b27-3dfb23024d61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed458d0245fb7a5bb0fecbebe707cbd82282b6400b2987123d7a817b07b4f67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://570276e01d3b6053b9ed678072ebe9aefd06649f938534b9eb28dbfce7a61c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbd97516c011d2409d20421274ee29e057e765ef360fe1c0357453d0148a1525\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.569123 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.583286 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.583329 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.583341 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.583360 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.583328 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df51e0d3a36daa188d0c0c9e9998c5e30fd822fda8e9d805fafb3c3418a0b57f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236a47f6184931442f75ed8018af71548f75caac6e854256c76efc9d9ddece9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.583377 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.596775 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.609069 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"949e234b-60b0-40e4-a423-0596dafd56c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwpwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5t4kc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.621260 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56755187-a7bb-4aab-bd0f-4fb1e7c81d66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534d9bc3c0d963bc16b3f845423d1e02cbf7d7cc16571aeae544f8b103a051fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2a8b1052134d1060a9a13e20cf0a4913c36a553774d305b1061722c0626da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58bf3ddbf898dd905efbc087baa80ba9a9f4a93ed305f3aa8934f875abcb4216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5232ba51dd73b53be9ca9cb5c2070a2aaeb48780da40e7abbb1fe1480ba06018\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.635735 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f0bfe7-ae68-4218-b0ca-735fa4098f1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:38Z\\\",\\\"message\\\":\\\"lhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764349101\\\\\\\\\\\\\\\" (2025-11-28 16:58:20 +0000 UTC to 2025-12-28 16:58:21 +0000 UTC (now=2025-11-28 16:58:38.648628159 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648781 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764349112\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764349112\\\\\\\\\\\\\\\" (2025-11-28 15:58:31 +0000 UTC to 2026-11-28 15:58:31 +0000 UTC (now=2025-11-28 16:58:38.648763153 +0000 UTC))\\\\\\\"\\\\nI1128 16:58:38.648801 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1128 16:58:38.648832 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1128 16:58:38.648857 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648890 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1128 16:58:38.648924 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3201983412/tls.crt::/tmp/serving-cert-3201983412/tls.key\\\\\\\"\\\\nI1128 16:58:38.649060 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1128 16:58:38.650975 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1128 16:58:38.650997 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1128 16:58:38.651011 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1128 16:58:38.651015 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nF1128 16:58:38.652207 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.650251 5024 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122f68034ce162e2379932dbca67bd2cbc9b32f3d9fe233868233235af699c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.660251 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7lvcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc4fee7b-b7f6-48fc-98a4-4b360515a817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e0b04798c0a91a1c74f162e540f8c72898a6094af99c65515a8b7f65f01eb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7lvcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.672854 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.684766 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c16caf1573eb96b54325ebb7a839d9bebe09916c422fe285320503893845296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.685799 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.685830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.685841 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.685856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.685866 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.694528 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rcqbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1b656d6-b82b-43ff-ad36-f9ed63e26031\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a44d449d2cfe59e6fcf8dcc850c110ce24f27f67c1f0b9a57a1f75910a49e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p5gw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:42Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rcqbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.705385 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda6719c-a2bb-4a93-bafe-3118fb33bb19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007bc631d8009ebd16f302d8b501e77cf6bf1b47be66f47f638bdfdca0612d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011e7cfe76cafa902ef108d0d6f964b2d161f65b61d00322530e343f159d815e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-4c2xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-h4h4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.715116 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210df6e3-539a-4a22-b118-7d0cd5f01bba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://155da67f291f7b2b01e88f859d0c5e8dad924363c72e0cbba9dbaec899a6f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6191c20f22a7164a4f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0213b699dc472ae7febacb8dce2ddb542e70dc307b3a6191c20f22a7164a4f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:19Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.726565 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77bf51a4-547d-4a7b-b841-59f4fbacbd97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba22158c1871746a88c790188fb56780fce8402f68c1a73234eac89ede8d6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-sc84d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ps8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.739396 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4vh86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97cac632-c692-414d-b0cf-605f0bb7719b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:30Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08\\\\n2025-11-28T16:58:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6b91f0e-4dfd-44e4-bff5-136ab64d1d08 to /host/opt/cni/bin/\\\\n2025-11-28T16:58:45Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:45Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mwg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4vh86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.755553 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ttb72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afb0c264-2fb7-436d-9afa-07e208efebd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://007f21fff3bfe0a940097dcf61d987c39cbac0a34995960e706aef21e8838af0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4914a2e7d2859773ccc225272d31aaf5470484d000b4f641ed0e5e805c3ea1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7cbf1faddaf2258656d64ef7f1012bffa5f09e81dfb2146b20f18bd2821224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df3c97b1b18457fd04c5ff7b3d3818790ba938a6f1363dc7a5593606b4c44d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea608b21836d19d723d4d4801454e932f5bc39e613093de6c3706b13407050f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630ee5d4cf058d87f59d2a3114ef0a95ed3673179f49949e5d30966e4969aa6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fc079f3adb8e715f844b126e1b6a900f9ee3cd12c4052bf7a3a1c5b2d412bac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2czsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ttb72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.779488 5024 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1542ec-e582-404b-8649-4a2a3e6ac398\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"message\\\":\\\"ateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1128 16:59:46.792538 7044 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792550 7044 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI1128 16:59:46.792558 7044 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1128 16:59:46.792579 7044 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterL\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvvzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-b2gbm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.788111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.788174 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.788187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.788209 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.788224 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
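
[The status-patch failure above is not a kubelet bug: the network-node-identity admission webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2025-11-28, so every pod status patch is rejected at the TLS layer. A minimal sketch for confirming this from the node, assuming Python 3 with the third-party cryptography package is available; verification is disabled on purpose so the expired certificate can still be read:

    # Fetch the serving certificate from the webhook endpoint named in the
    # log (127.0.0.1:9743) and print its validity window.
    import socket
    import ssl
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the log line above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect only; an expired cert would fail validation

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # DER bytes, even without validation

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)  # newer cryptography: not_valid_before_utc
    print("notAfter: ", cert.not_valid_after)   # expect 2025-08-24T17:21:41Z per the log

Rotating the expired certificate (or correcting the node clock, if it is wrong) is the prerequisite for every later patch in this log to succeed.]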
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.892059 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.892130 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.892142 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.892164 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.892175 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.995791 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.995847 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.995857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.995878 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[5024]: I1128 16:59:58.995890 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.040668 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 16:59:59 crc kubenswrapper[5024]: E1128 16:59:59.040898 5024 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:59 crc kubenswrapper[5024]: E1128 16:59:59.040965 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs podName:949e234b-60b0-40e4-a423-0596dafd56c1 nodeName:}" failed. No retries permitted until 2025-11-28 17:01:03.040947337 +0000 UTC m=+165.089868242 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs") pod "network-metrics-daemon-5t4kc" (UID: "949e234b-60b0-40e4-a423-0596dafd56c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.098779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.098817 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.098830 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.098849 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.098860 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.201657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.201713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.201726 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.201747 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.201759 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
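
[The 1m4s in "durationBeforeRetry 1m4s" above is an exponential back-off step: 64s is what doubling from an initial 0.5s delay seven times produces. The exact constants are an assumption inferred from the observed delay (the kubelet caps this volume-operation back-off at roughly two minutes), not read out of the log itself:

    # Reconstruct the retry schedule implied by "durationBeforeRetry 1m4s".
    # Initial 0.5s and factor 2 are assumptions consistent with the observed 64s.
    delay, schedule = 0.5, []
    while delay <= 64:
        schedule.append(delay)
        delay *= 2
    print(schedule)  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0] -> 8th failure waits 1m4s

So the secret mount has already failed several times; it will keep failing until "openshift-multus"/"metrics-daemon-secret" is registered with the kubelet's object cache.]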
Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.304354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.304401 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.304413 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.304431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.304443 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.407450 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.407512 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.407529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.407551 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.407564 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.497413 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.497491 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.497525 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:59 crc kubenswrapper[5024]: E1128 16:59:59.497597 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
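
[Every NodeNotReady heartbeat in this stretch carries the same root message: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/, and that directory is only populated once the ovnkube-node pod (in CrashLoopBackOff above) writes its config. A sketch of the same readiness check, assuming the usual .conf/.conflist/.json naming the runtime looks for:

    # Check the CNI config directory named in the log the way the runtime
    # does: any *.conf, *.conflist or *.json file counts as a network config.
    from pathlib import Path

    CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # directory from the log message

    confs = sorted(
        p.name for p in CNI_DIR.iterdir()
        if p.suffix in (".conf", ".conflist", ".json")
    ) if CNI_DIR.is_dir() else []

    print("CNI configs:", confs or "none - NetworkReady stays false")

Until a config file appears there, the Ready condition flaps exactly as recorded below, and sandbox-less pods cannot be started.]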
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:59 crc kubenswrapper[5024]: E1128 16:59:59.497732 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:59 crc kubenswrapper[5024]: E1128 16:59:59.497905 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.510736 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.510777 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.510786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.510801 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.510810 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.614256 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.614316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.614330 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.614351 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.614363 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.718070 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.718149 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.718167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.718196 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.718215 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.821249 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.821317 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.821333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.821361 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.821378 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.925402 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.925446 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.925456 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.925475 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[5024]: I1128 16:59:59.925486 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.028243 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.028299 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.028309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.028331 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.028342 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.132228 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.132309 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.132348 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.132367 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.132377 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.235155 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.235211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.235223 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.235241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.235255 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.338629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.338686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.338696 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.338714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.338724 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.441694 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.441742 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.441756 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.441778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.441793 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.498063 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:00 crc kubenswrapper[5024]: E1128 17:00:00.498516 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.498848 5024 scope.go:117] "RemoveContainer" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" Nov 28 17:00:00 crc kubenswrapper[5024]: E1128 17:00:00.499172 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.544852 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.544912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.544924 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.544943 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.544955 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.648386 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.648437 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.648447 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.648466 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.648478 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.751690 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.751754 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.751770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.751793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.751809 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.855099 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.855150 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.855165 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.855184 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.855195 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.958664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.958758 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.958772 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.958793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[5024]: I1128 17:00:00.958804 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.061829 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.061884 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.061897 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.061922 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.061934 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.165582 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.165646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.165659 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.165684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.165698 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.268743 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.268806 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.268819 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.268840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.268854 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.372419 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.372492 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.372508 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.372528 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.372542 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.475647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.475701 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.475713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.475738 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.475752 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.497283 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.497493 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.497553 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:01 crc kubenswrapper[5024]: E1128 17:00:01.497714 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:01 crc kubenswrapper[5024]: E1128 17:00:01.497842 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:01 crc kubenswrapper[5024]: E1128 17:00:01.497905 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.579354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.579406 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.579418 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.579436 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.579448 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.681530 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.681576 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.681589 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.681608 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.681624 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.784431 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.784483 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.784493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.784513 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.784526 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.887671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.887748 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.887759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.887779 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.887792 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.991237 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.991298 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.991312 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.991335 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[5024]: I1128 17:00:01.991351 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.095709 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.095760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.095774 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.095795 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.095810 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.199270 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.199319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.199334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.199354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.199368 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.302890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.302943 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.302956 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.302975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.302986 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.347228 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.347296 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.347314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.347343 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.347362 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: E1128 17:00:02.370932 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:02Z is after 
2025-08-24T17:21:41Z" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.378675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.378762 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.378786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.378826 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.378849 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: E1128 17:00:02.403833 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:02Z is after 
2025-08-24T17:21:41Z" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.410586 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.410657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.410681 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.410717 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.410842 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: E1128 17:00:02.434514 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:02Z is after 
2025-08-24T17:21:41Z" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.440783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.440832 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.440845 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.440868 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.440882 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: E1128 17:00:02.460544 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:02Z is after 
2025-08-24T17:21:41Z" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.466226 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.466319 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.466352 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.466381 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.466404 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: E1128 17:00:02.487844 5024 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e109ddab-de02-41b4-a5ee-6ddddeff5610\\\",\\\"systemUUID\\\":\\\"fe25c19c-2a8b-43d8-b80c-708649046fac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:02Z is after 
2025-08-24T17:21:41Z" Nov 28 17:00:02 crc kubenswrapper[5024]: E1128 17:00:02.487999 5024 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.490019 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.490086 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.490100 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.490121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.490135 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.497664 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:02 crc kubenswrapper[5024]: E1128 17:00:02.498067 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.593062 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.593112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.593135 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.593154 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.593166 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.696722 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.696763 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.696774 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.696793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.696806 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.800076 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.800151 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.800170 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.800198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.800218 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.902726 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.902778 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.902789 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.902808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:02 crc kubenswrapper[5024]: I1128 17:00:02.902819 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:02Z","lastTransitionTime":"2025-11-28T17:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.005687 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.005760 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.005775 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.005792 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.005804 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.108899 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.108953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.108967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.108985 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.108995 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.212538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.212601 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.212672 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.212700 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.212711 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.315713 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.315770 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.315783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.315808 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.315820 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.420292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.420354 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.420380 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.420412 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.420437 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.498171 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.498229 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.498208 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:03 crc kubenswrapper[5024]: E1128 17:00:03.498393 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:03 crc kubenswrapper[5024]: E1128 17:00:03.498518 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:03 crc kubenswrapper[5024]: E1128 17:00:03.498645 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.523579 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.523616 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.523628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.523677 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.523693 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.628494 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.628574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.628591 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.628617 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.628633 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.731934 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.731993 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.732004 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.732021 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.732052 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.834334 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.834374 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.834383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.834398 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.834407 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.937391 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.937468 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.937486 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.937515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[5024]: I1128 17:00:03.937533 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.040969 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.041059 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.041072 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.041111 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.041124 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.143440 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.143491 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.143502 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.143522 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.143536 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.246011 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.246076 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.246086 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.246103 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.246114 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.348896 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.348964 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.348975 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.348995 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.349011 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.452120 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.452174 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.452185 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.452202 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.452213 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.497042 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:04 crc kubenswrapper[5024]: E1128 17:00:04.497194 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.554887 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.554940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.554953 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.554972 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.554983 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.658064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.658132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.658148 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.658179 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.658197 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.761752 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.761822 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.761838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.761864 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.761881 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.865616 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.865665 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.865675 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.865695 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.865708 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.968149 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.968200 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.968211 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.968229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[5024]: I1128 17:00:04.968242 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.072132 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.072198 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.072213 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.072237 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.072257 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.175010 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.175102 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.175118 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.175139 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.175152 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.277783 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.277841 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.277852 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.277869 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.277879 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.380575 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.380629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.380640 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.380659 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.380672 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.483496 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.483628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.483645 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.483671 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.483726 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.497785 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.497845 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:05 crc kubenswrapper[5024]: E1128 17:00:05.497981 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.498063 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:05 crc kubenswrapper[5024]: E1128 17:00:05.498197 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:05 crc kubenswrapper[5024]: E1128 17:00:05.498481 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.587369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.587432 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.587455 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.587485 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.587505 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.690759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.690837 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.690865 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.690898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.690922 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.794597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.794674 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.794698 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.794734 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.794757 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.897629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.897673 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.897682 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.897697 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[5024]: I1128 17:00:05.897708 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.002939 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.002993 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.003002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.003060 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.003072 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.105460 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.105523 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.105542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.105564 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.105582 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.210187 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.210292 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.210313 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.210342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.210376 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.313684 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.313746 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.313762 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.313786 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.313805 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.417755 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.417809 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.417819 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.417838 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.417867 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.497466 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:06 crc kubenswrapper[5024]: E1128 17:00:06.497771 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.522780 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.522867 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.522881 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.522906 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.522922 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.626584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.626638 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.626646 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.626681 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.626692 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.730172 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.730229 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.730238 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.730258 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.730276 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.833793 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.833890 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.833911 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.833940 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.833960 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.938241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.938317 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.938342 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.938430 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:06 crc kubenswrapper[5024]: I1128 17:00:06.938465 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.041314 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.041400 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.041425 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.041451 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.041468 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.144761 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.144836 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.144857 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.144888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.144910 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.248312 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.248670 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.248728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.248764 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.248789 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.352503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.352590 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.352626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.352657 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.352680 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.455427 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.455542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.455574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.455626 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.455661 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.497128 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.497428 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.497499 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:07 crc kubenswrapper[5024]: E1128 17:00:07.497880 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:07 crc kubenswrapper[5024]: E1128 17:00:07.497977 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:07 crc kubenswrapper[5024]: E1128 17:00:07.497923 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.559506 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.559560 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.559583 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.559607 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.559618 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.662814 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.662858 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.662870 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.662888 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.662900 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.765461 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.765549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.765574 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.765632 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.765660 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.869391 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.869466 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.869493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.869526 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.869546 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.973434 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.973558 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.973642 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.973682 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:07 crc kubenswrapper[5024]: I1128 17:00:07.973706 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.077566 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.077647 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.077664 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.077683 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.077694 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.412757 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.412853 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.412895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.412917 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.412932 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.497069 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:08 crc kubenswrapper[5024]: E1128 17:00:08.497236 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.515631 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.516046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.516166 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.516479 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.516666 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.552637 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rcqbr" podStartSLOduration=90.552607156 podStartE2EDuration="1m30.552607156s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.540654584 +0000 UTC m=+110.589575489" watchObservedRunningTime="2025-11-28 17:00:08.552607156 +0000 UTC m=+110.601528061" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.576343 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-h4h4g" podStartSLOduration=89.576101706 podStartE2EDuration="1m29.576101706s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.554574014 +0000 UTC m=+110.603494929" watchObservedRunningTime="2025-11-28 17:00:08.576101706 +0000 UTC m=+110.625022611" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.600217 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podStartSLOduration=89.600193024 podStartE2EDuration="1m29.600193024s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.599633458 +0000 UTC m=+110.648554363" watchObservedRunningTime="2025-11-28 17:00:08.600193024 +0000 UTC m=+110.649113929" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.618088 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4vh86" podStartSLOduration=89.61806905 podStartE2EDuration="1m29.61806905s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.617666418 +0000 UTC m=+110.666587323" watchObservedRunningTime="2025-11-28 17:00:08.61806905 +0000 UTC m=+110.666989955" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.621443 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.621503 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.621517 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.621538 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.621551 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.638692 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ttb72" podStartSLOduration=89.638659945 podStartE2EDuration="1m29.638659945s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.636392558 +0000 UTC m=+110.685313473" watchObservedRunningTime="2025-11-28 17:00:08.638659945 +0000 UTC m=+110.687580850" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.707909 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.70788717 podStartE2EDuration="21.70788717s" podCreationTimestamp="2025-11-28 16:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.707775856 +0000 UTC m=+110.756696761" watchObservedRunningTime="2025-11-28 17:00:08.70788717 +0000 UTC m=+110.756808075" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.723880 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.723946 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.723961 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.723982 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.723995 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.732545 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.732527534 podStartE2EDuration="1m29.732527534s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.731205175 +0000 UTC m=+110.780126080" watchObservedRunningTime="2025-11-28 17:00:08.732527534 +0000 UTC m=+110.781448439" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.815431 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=30.81540989 podStartE2EDuration="30.81540989s" podCreationTimestamp="2025-11-28 16:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.814435291 +0000 UTC m=+110.863356216" watchObservedRunningTime="2025-11-28 17:00:08.81540989 +0000 UTC m=+110.864330795" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.827186 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.827235 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.827245 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.827263 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.827274 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.834932 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.834910023 podStartE2EDuration="1m30.834910023s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.833901813 +0000 UTC m=+110.882822728" watchObservedRunningTime="2025-11-28 17:00:08.834910023 +0000 UTC m=+110.883830938" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.867692 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7lvcw" podStartSLOduration=90.867665516 podStartE2EDuration="1m30.867665516s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.866943824 +0000 UTC m=+110.915864729" watchObservedRunningTime="2025-11-28 17:00:08.867665516 +0000 UTC m=+110.916586421" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.883158 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.88312974 podStartE2EDuration="1m0.88312974s" podCreationTimestamp="2025-11-28 16:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:08.881888674 +0000 UTC m=+110.930809579" watchObservedRunningTime="2025-11-28 17:00:08.88312974 +0000 UTC m=+110.932050665" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.953876 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.953930 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.953941 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.953961 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:08 crc kubenswrapper[5024]: I1128 17:00:08.953970 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.056241 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.056300 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.056311 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.056329 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.056341 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.158757 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.158818 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.158828 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.158848 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.158871 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.262855 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.262968 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.263000 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.263087 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.263127 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.366308 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.366357 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.366369 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.366389 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.366404 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.469460 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.469519 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.469530 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.469549 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.469559 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.498133 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.498188 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.498216 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:09 crc kubenswrapper[5024]: E1128 17:00:09.498394 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:09 crc kubenswrapper[5024]: E1128 17:00:09.498513 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:09 crc kubenswrapper[5024]: E1128 17:00:09.498692 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.572963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.573069 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.573083 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.573102 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.573117 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.675812 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.675861 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.675875 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.675895 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.675906 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.779493 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.779542 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.779550 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.779568 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.779578 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.881915 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.881970 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.881983 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.882002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.882015 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.985333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.985383 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.985394 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.985414 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:09 crc kubenswrapper[5024]: I1128 17:00:09.985426 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.087856 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.087919 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.087931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.087955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.087967 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.191162 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.191208 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.191217 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.191233 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.191242 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.293527 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.293570 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.293581 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.293599 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.293611 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.396967 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.397105 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.397131 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.397167 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.397187 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.496956 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:10 crc kubenswrapper[5024]: E1128 17:00:10.497466 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.499650 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.499707 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.499720 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.499735 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.499746 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.602802 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.602874 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.602891 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.602912 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.602924 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.706439 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.706495 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.706507 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.706529 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.706542 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.809240 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.809323 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.809345 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.809375 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.809395 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.913385 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.913456 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.913469 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.913489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[5024]: I1128 17:00:10.913509 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.015943 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.015999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.016012 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.016046 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.016058 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.118633 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.118693 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.118708 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.118732 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.118743 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.222628 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.222680 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.222692 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.222711 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.222724 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.326191 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.326271 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.326293 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.326316 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.326329 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.429740 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.429823 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.429848 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.429879 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.429903 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.497254 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.497374 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.497631 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:11 crc kubenswrapper[5024]: E1128 17:00:11.497759 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:11 crc kubenswrapper[5024]: E1128 17:00:11.497867 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:11 crc kubenswrapper[5024]: E1128 17:00:11.497990 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.498065 5024 scope.go:117] "RemoveContainer" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" Nov 28 17:00:11 crc kubenswrapper[5024]: E1128 17:00:11.498231 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.533444 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.533489 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.533498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.533515 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.533526 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.636780 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.636828 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.636840 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.636861 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.636874 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.740517 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.740584 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.740597 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.740617 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.740633 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.844002 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.844072 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.844085 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.844304 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.844321 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.947810 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.947863 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.947877 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.947955 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[5024]: I1128 17:00:11.947968 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.050999 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.051055 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.051064 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.051121 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.051130 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.154274 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.154706 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.154728 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.154759 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.154777 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.257261 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.257324 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.257333 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.257349 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.257359 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.360112 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.360498 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.360667 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.360744 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.360817 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.463842 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.463898 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.463908 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.463931 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.463943 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.498059 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:12 crc kubenswrapper[5024]: E1128 17:00:12.498251 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.567629 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.567686 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.567696 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.567714 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.567729 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.627883 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.627947 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.627963 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.627984 5024 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.627995 5024 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.680905 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9"] Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.681546 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.685804 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.686427 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.686339 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.690060 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.757098 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9795af2e-a8bf-4986-a746-0eb769d81d5d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.757160 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9795af2e-a8bf-4986-a746-0eb769d81d5d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.757190 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9795af2e-a8bf-4986-a746-0eb769d81d5d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.757228 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9795af2e-a8bf-4986-a746-0eb769d81d5d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.757442 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9795af2e-a8bf-4986-a746-0eb769d81d5d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.858779 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9795af2e-a8bf-4986-a746-0eb769d81d5d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc 
kubenswrapper[5024]: I1128 17:00:12.858835 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9795af2e-a8bf-4986-a746-0eb769d81d5d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.858860 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9795af2e-a8bf-4986-a746-0eb769d81d5d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.858894 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9795af2e-a8bf-4986-a746-0eb769d81d5d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.858925 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9795af2e-a8bf-4986-a746-0eb769d81d5d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.858989 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9795af2e-a8bf-4986-a746-0eb769d81d5d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.859114 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9795af2e-a8bf-4986-a746-0eb769d81d5d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.859670 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9795af2e-a8bf-4986-a746-0eb769d81d5d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.864647 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9795af2e-a8bf-4986-a746-0eb769d81d5d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:12 crc kubenswrapper[5024]: I1128 17:00:12.878226 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9795af2e-a8bf-4986-a746-0eb769d81d5d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xd6d9\" (UID: \"9795af2e-a8bf-4986-a746-0eb769d81d5d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:13 crc kubenswrapper[5024]: I1128 17:00:13.001467 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" Nov 28 17:00:13 crc kubenswrapper[5024]: I1128 17:00:13.431849 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" event={"ID":"9795af2e-a8bf-4986-a746-0eb769d81d5d","Type":"ContainerStarted","Data":"ec6ef023cda9e894bd7030a871a9916b82c09a8504d0d376562fb74ed89e413b"} Nov 28 17:00:13 crc kubenswrapper[5024]: I1128 17:00:13.431936 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" event={"ID":"9795af2e-a8bf-4986-a746-0eb769d81d5d","Type":"ContainerStarted","Data":"badb106bfa2416a3fcc2677ce52d7f3cbcb3b38119279f63469caa3ba8473eba"} Nov 28 17:00:13 crc kubenswrapper[5024]: I1128 17:00:13.478723 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xd6d9" podStartSLOduration=94.478704707 podStartE2EDuration="1m34.478704707s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:13.477901843 +0000 UTC m=+115.526822758" watchObservedRunningTime="2025-11-28 17:00:13.478704707 +0000 UTC m=+115.527625612" Nov 28 17:00:13 crc kubenswrapper[5024]: I1128 17:00:13.496962 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:13 crc kubenswrapper[5024]: I1128 17:00:13.497010 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:13 crc kubenswrapper[5024]: I1128 17:00:13.497070 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:13 crc kubenswrapper[5024]: E1128 17:00:13.497147 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:13 crc kubenswrapper[5024]: E1128 17:00:13.497253 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:13 crc kubenswrapper[5024]: E1128 17:00:13.497305 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:14 crc kubenswrapper[5024]: I1128 17:00:14.498093 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:14 crc kubenswrapper[5024]: E1128 17:00:14.498862 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:15 crc kubenswrapper[5024]: I1128 17:00:15.497674 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:15 crc kubenswrapper[5024]: I1128 17:00:15.497817 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:15 crc kubenswrapper[5024]: I1128 17:00:15.497674 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:15 crc kubenswrapper[5024]: E1128 17:00:15.497914 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:15 crc kubenswrapper[5024]: E1128 17:00:15.498140 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:15 crc kubenswrapper[5024]: E1128 17:00:15.498375 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:16 crc kubenswrapper[5024]: I1128 17:00:16.497413 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:16 crc kubenswrapper[5024]: E1128 17:00:16.497625 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:17 crc kubenswrapper[5024]: I1128 17:00:17.497479 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:17 crc kubenswrapper[5024]: I1128 17:00:17.497536 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:17 crc kubenswrapper[5024]: I1128 17:00:17.497640 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:17 crc kubenswrapper[5024]: E1128 17:00:17.497791 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:17 crc kubenswrapper[5024]: E1128 17:00:17.497942 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:17 crc kubenswrapper[5024]: E1128 17:00:17.498125 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:18 crc kubenswrapper[5024]: I1128 17:00:18.451201 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/1.log" Nov 28 17:00:18 crc kubenswrapper[5024]: I1128 17:00:18.451970 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/0.log" Nov 28 17:00:18 crc kubenswrapper[5024]: I1128 17:00:18.452034 5024 generic.go:334] "Generic (PLEG): container finished" podID="97cac632-c692-414d-b0cf-605f0bb7719b" containerID="fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6" exitCode=1 Nov 28 17:00:18 crc kubenswrapper[5024]: I1128 17:00:18.452074 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerDied","Data":"fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6"} Nov 28 17:00:18 crc kubenswrapper[5024]: I1128 17:00:18.452156 5024 scope.go:117] "RemoveContainer" containerID="a47157622658f93b20863a8d39b8409f9cf61bce1491b84ca241f4806820f216" Nov 28 17:00:18 crc kubenswrapper[5024]: I1128 17:00:18.452691 5024 scope.go:117] "RemoveContainer" containerID="fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6" Nov 28 17:00:18 crc kubenswrapper[5024]: E1128 17:00:18.452895 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4vh86_openshift-multus(97cac632-c692-414d-b0cf-605f0bb7719b)\"" pod="openshift-multus/multus-4vh86" podUID="97cac632-c692-414d-b0cf-605f0bb7719b" Nov 28 17:00:18 crc kubenswrapper[5024]: E1128 17:00:18.467659 5024 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 17:00:18 crc kubenswrapper[5024]: I1128 17:00:18.497344 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:18 crc kubenswrapper[5024]: E1128 17:00:18.498801 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:18 crc kubenswrapper[5024]: E1128 17:00:18.696246 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:19 crc kubenswrapper[5024]: I1128 17:00:19.458803 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/1.log" Nov 28 17:00:19 crc kubenswrapper[5024]: I1128 17:00:19.497379 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:19 crc kubenswrapper[5024]: I1128 17:00:19.497536 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:19 crc kubenswrapper[5024]: E1128 17:00:19.497603 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:19 crc kubenswrapper[5024]: E1128 17:00:19.497724 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:19 crc kubenswrapper[5024]: I1128 17:00:19.497411 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:19 crc kubenswrapper[5024]: E1128 17:00:19.497842 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:20 crc kubenswrapper[5024]: I1128 17:00:20.497862 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:20 crc kubenswrapper[5024]: E1128 17:00:20.498038 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:21 crc kubenswrapper[5024]: I1128 17:00:21.497442 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:21 crc kubenswrapper[5024]: I1128 17:00:21.497543 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:21 crc kubenswrapper[5024]: I1128 17:00:21.497460 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:21 crc kubenswrapper[5024]: E1128 17:00:21.497619 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:21 crc kubenswrapper[5024]: E1128 17:00:21.498208 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:21 crc kubenswrapper[5024]: E1128 17:00:21.498105 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:22 crc kubenswrapper[5024]: I1128 17:00:22.497755 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:22 crc kubenswrapper[5024]: E1128 17:00:22.497959 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:23 crc kubenswrapper[5024]: I1128 17:00:23.497052 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:23 crc kubenswrapper[5024]: I1128 17:00:23.497156 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:23 crc kubenswrapper[5024]: E1128 17:00:23.497216 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:23 crc kubenswrapper[5024]: E1128 17:00:23.497404 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:23 crc kubenswrapper[5024]: I1128 17:00:23.497691 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:23 crc kubenswrapper[5024]: E1128 17:00:23.497803 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:23 crc kubenswrapper[5024]: E1128 17:00:23.698873 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:24 crc kubenswrapper[5024]: I1128 17:00:24.497374 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:24 crc kubenswrapper[5024]: E1128 17:00:24.497562 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:25 crc kubenswrapper[5024]: I1128 17:00:25.497474 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:25 crc kubenswrapper[5024]: I1128 17:00:25.497538 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:25 crc kubenswrapper[5024]: I1128 17:00:25.497597 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:25 crc kubenswrapper[5024]: E1128 17:00:25.497683 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:25 crc kubenswrapper[5024]: E1128 17:00:25.497757 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:25 crc kubenswrapper[5024]: E1128 17:00:25.497831 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:26 crc kubenswrapper[5024]: I1128 17:00:26.512563 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:26 crc kubenswrapper[5024]: E1128 17:00:26.512741 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:26 crc kubenswrapper[5024]: I1128 17:00:26.513689 5024 scope.go:117] "RemoveContainer" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" Nov 28 17:00:26 crc kubenswrapper[5024]: E1128 17:00:26.513882 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-b2gbm_openshift-ovn-kubernetes(5b1542ec-e582-404b-8649-4a2a3e6ac398)\"" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" Nov 28 17:00:27 crc kubenswrapper[5024]: I1128 17:00:27.497040 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:27 crc kubenswrapper[5024]: I1128 17:00:27.497042 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:27 crc kubenswrapper[5024]: E1128 17:00:27.497206 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:27 crc kubenswrapper[5024]: I1128 17:00:27.497070 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:27 crc kubenswrapper[5024]: E1128 17:00:27.497384 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:27 crc kubenswrapper[5024]: E1128 17:00:27.497428 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:28 crc kubenswrapper[5024]: I1128 17:00:28.497432 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:28 crc kubenswrapper[5024]: E1128 17:00:28.499471 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:28 crc kubenswrapper[5024]: E1128 17:00:28.700638 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:29 crc kubenswrapper[5024]: I1128 17:00:29.497541 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:29 crc kubenswrapper[5024]: I1128 17:00:29.497541 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:29 crc kubenswrapper[5024]: E1128 17:00:29.497733 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:29 crc kubenswrapper[5024]: I1128 17:00:29.497576 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:29 crc kubenswrapper[5024]: E1128 17:00:29.497850 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:29 crc kubenswrapper[5024]: E1128 17:00:29.497894 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:30 crc kubenswrapper[5024]: I1128 17:00:30.497631 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:30 crc kubenswrapper[5024]: E1128 17:00:30.498157 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:30 crc kubenswrapper[5024]: I1128 17:00:30.498598 5024 scope.go:117] "RemoveContainer" containerID="fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6" Nov 28 17:00:31 crc kubenswrapper[5024]: I1128 17:00:31.497364 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:31 crc kubenswrapper[5024]: E1128 17:00:31.498257 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:31 crc kubenswrapper[5024]: I1128 17:00:31.497544 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:31 crc kubenswrapper[5024]: E1128 17:00:31.498993 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:31 crc kubenswrapper[5024]: I1128 17:00:31.497480 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:31 crc kubenswrapper[5024]: E1128 17:00:31.499274 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:31 crc kubenswrapper[5024]: I1128 17:00:31.533545 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/1.log" Nov 28 17:00:31 crc kubenswrapper[5024]: I1128 17:00:31.533603 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerStarted","Data":"3a37dfec474ed39a219775a09f2e6b802a00e45a060e671c988f1e68293d49df"} Nov 28 17:00:32 crc kubenswrapper[5024]: I1128 17:00:32.497209 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:32 crc kubenswrapper[5024]: E1128 17:00:32.497384 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:33 crc kubenswrapper[5024]: I1128 17:00:33.497735 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:33 crc kubenswrapper[5024]: I1128 17:00:33.497798 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:33 crc kubenswrapper[5024]: I1128 17:00:33.497798 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:33 crc kubenswrapper[5024]: E1128 17:00:33.497921 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:33 crc kubenswrapper[5024]: E1128 17:00:33.498103 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:33 crc kubenswrapper[5024]: E1128 17:00:33.498188 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:33 crc kubenswrapper[5024]: E1128 17:00:33.702137 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:34 crc kubenswrapper[5024]: I1128 17:00:34.497688 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:34 crc kubenswrapper[5024]: E1128 17:00:34.497892 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:35 crc kubenswrapper[5024]: I1128 17:00:35.496842 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:35 crc kubenswrapper[5024]: I1128 17:00:35.496842 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:35 crc kubenswrapper[5024]: E1128 17:00:35.496966 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:35 crc kubenswrapper[5024]: E1128 17:00:35.497050 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:35 crc kubenswrapper[5024]: I1128 17:00:35.496844 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:35 crc kubenswrapper[5024]: E1128 17:00:35.497122 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:36 crc kubenswrapper[5024]: I1128 17:00:36.497258 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:36 crc kubenswrapper[5024]: E1128 17:00:36.497437 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:37 crc kubenswrapper[5024]: I1128 17:00:37.497326 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:37 crc kubenswrapper[5024]: I1128 17:00:37.497424 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:37 crc kubenswrapper[5024]: I1128 17:00:37.497480 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:37 crc kubenswrapper[5024]: E1128 17:00:37.497558 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:37 crc kubenswrapper[5024]: E1128 17:00:37.497683 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:37 crc kubenswrapper[5024]: E1128 17:00:37.497832 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:38 crc kubenswrapper[5024]: I1128 17:00:38.497155 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:38 crc kubenswrapper[5024]: E1128 17:00:38.498198 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:38 crc kubenswrapper[5024]: E1128 17:00:38.704133 5024 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:39 crc kubenswrapper[5024]: I1128 17:00:39.496927 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:39 crc kubenswrapper[5024]: E1128 17:00:39.497103 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:39 crc kubenswrapper[5024]: I1128 17:00:39.497793 5024 scope.go:117] "RemoveContainer" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" Nov 28 17:00:39 crc kubenswrapper[5024]: I1128 17:00:39.498223 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:39 crc kubenswrapper[5024]: E1128 17:00:39.498293 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:39 crc kubenswrapper[5024]: I1128 17:00:39.498351 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:39 crc kubenswrapper[5024]: E1128 17:00:39.498403 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:40 crc kubenswrapper[5024]: I1128 17:00:40.497662 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:40 crc kubenswrapper[5024]: E1128 17:00:40.498094 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:40 crc kubenswrapper[5024]: I1128 17:00:40.563689 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/3.log" Nov 28 17:00:40 crc kubenswrapper[5024]: I1128 17:00:40.566190 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerStarted","Data":"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd"} Nov 28 17:00:40 crc kubenswrapper[5024]: I1128 17:00:40.566637 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 17:00:40 crc kubenswrapper[5024]: I1128 17:00:40.598394 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podStartSLOduration=121.598373388 podStartE2EDuration="2m1.598373388s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:40.597256357 +0000 UTC m=+142.646177262" watchObservedRunningTime="2025-11-28 17:00:40.598373388 +0000 UTC m=+142.647294293" Nov 28 17:00:40 crc kubenswrapper[5024]: I1128 17:00:40.916372 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5t4kc"] Nov 28 17:00:40 crc kubenswrapper[5024]: I1128 17:00:40.916522 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:40 crc kubenswrapper[5024]: E1128 17:00:40.916673 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:41 crc kubenswrapper[5024]: I1128 17:00:41.497067 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:41 crc kubenswrapper[5024]: I1128 17:00:41.497109 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:41 crc kubenswrapper[5024]: I1128 17:00:41.497146 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:41 crc kubenswrapper[5024]: E1128 17:00:41.497233 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:41 crc kubenswrapper[5024]: E1128 17:00:41.497321 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:41 crc kubenswrapper[5024]: E1128 17:00:41.497469 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:42 crc kubenswrapper[5024]: I1128 17:00:42.497889 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:42 crc kubenswrapper[5024]: E1128 17:00:42.498047 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5t4kc" podUID="949e234b-60b0-40e4-a423-0596dafd56c1" Nov 28 17:00:43 crc kubenswrapper[5024]: I1128 17:00:43.497743 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:43 crc kubenswrapper[5024]: I1128 17:00:43.497837 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:43 crc kubenswrapper[5024]: E1128 17:00:43.497917 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:43 crc kubenswrapper[5024]: E1128 17:00:43.498035 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:43 crc kubenswrapper[5024]: I1128 17:00:43.498549 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:43 crc kubenswrapper[5024]: E1128 17:00:43.498710 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:44 crc kubenswrapper[5024]: I1128 17:00:44.497519 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc" Nov 28 17:00:44 crc kubenswrapper[5024]: I1128 17:00:44.500741 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 17:00:44 crc kubenswrapper[5024]: I1128 17:00:44.501759 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 17:00:45 crc kubenswrapper[5024]: I1128 17:00:45.496970 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:45 crc kubenswrapper[5024]: I1128 17:00:45.497004 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:45 crc kubenswrapper[5024]: I1128 17:00:45.497086 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:45 crc kubenswrapper[5024]: I1128 17:00:45.500824 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 17:00:45 crc kubenswrapper[5024]: I1128 17:00:45.501982 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 17:00:45 crc kubenswrapper[5024]: I1128 17:00:45.501559 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 17:00:45 crc kubenswrapper[5024]: I1128 17:00:45.502090 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.638461 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:47 crc kubenswrapper[5024]: E1128 17:00:47.638627 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:02:49.638593186 +0000 UTC m=+271.687514091 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.739346 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.739389 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.745089 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.840646 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.840732 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.844596 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.845660 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.914083 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:47 crc kubenswrapper[5024]: I1128 17:00:47.930254 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:48 crc kubenswrapper[5024]: I1128 17:00:48.256787 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:48 crc kubenswrapper[5024]: I1128 17:00:48.522428 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:48 crc kubenswrapper[5024]: W1128 17:00:48.594436 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-1d6c4cd06494178dcc007fa6f41a18940cea64ad8f364020ad69155b15e852d9 WatchSource:0}: Error finding container 1d6c4cd06494178dcc007fa6f41a18940cea64ad8f364020ad69155b15e852d9: Status 404 returned error can't find the container with id 1d6c4cd06494178dcc007fa6f41a18940cea64ad8f364020ad69155b15e852d9 Nov 28 17:00:48 crc kubenswrapper[5024]: I1128 17:00:48.597427 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"030f18ca1d3c2f9a7c9fad983dbd7b48e38062d9de12df7fff102c2ac3492e13"} Nov 28 17:00:48 crc kubenswrapper[5024]: W1128 17:00:48.716527 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3bef15a553f78bc01e09c247b64ba98b2f9f25fd1babafe06f118128a9307f1a WatchSource:0}: Error finding container 3bef15a553f78bc01e09c247b64ba98b2f9f25fd1babafe06f118128a9307f1a: Status 404 returned error can't find the container with id 3bef15a553f78bc01e09c247b64ba98b2f9f25fd1babafe06f118128a9307f1a Nov 28 17:00:49 crc kubenswrapper[5024]: I1128 17:00:49.603743 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3d0807957f0e5d903050890455b2f6cf13ae4ed42c814c85061f782c33e740e1"} Nov 28 17:00:49 crc kubenswrapper[5024]: I1128 17:00:49.604931 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:49 crc kubenswrapper[5024]: I1128 17:00:49.607161 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"83ee655f2ce23a7b5dcdb60eff07d5061fbfa14392d54a740dd2738ec952b3e0"} Nov 28 17:00:49 crc kubenswrapper[5024]: I1128 17:00:49.607244 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3bef15a553f78bc01e09c247b64ba98b2f9f25fd1babafe06f118128a9307f1a"} Nov 28 17:00:49 crc kubenswrapper[5024]: I1128 17:00:49.609592 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"129c0b56bc7e41bd9aa1a581890da9aaeb8af878e465cebaf4f89570ef38470f"} Nov 28 17:00:49 crc kubenswrapper[5024]: I1128 17:00:49.609616 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1d6c4cd06494178dcc007fa6f41a18940cea64ad8f364020ad69155b15e852d9"} Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.652486 5024 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeReady" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.704725 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6jk4g"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.705407 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.705686 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.706484 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.707425 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vk6x4"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.707986 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.719219 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2dsw"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.719686 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.724632 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.724964 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.725127 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.725228 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.725529 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.725659 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.726030 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.726146 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.726279 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.726719 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.727110 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.727212 5024 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.727297 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.727312 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.727682 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jvvpl"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.727951 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvvpl" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.727323 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.731053 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.742865 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.742942 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.743236 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.743288 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.745180 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.754416 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.754606 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.754687 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.754766 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.755594 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.755684 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.756179 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.756524 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.756741 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-r7n7g"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.757316 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.757453 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.757539 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.757860 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758031 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758542 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758042 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758099 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758147 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758144 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758189 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758206 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.758294 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.759054 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.759084 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.759150 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vj7pt"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.759450 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.759848 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.759853 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.759973 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.760436 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.760838 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4dfg"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.760880 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.761172 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.764841 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765695 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765836 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765877 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765911 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765941 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765968 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765972 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.765991 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 17:00:53 crc 
kubenswrapper[5024]: I1128 17:00:53.766038 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.766873 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.767536 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-b2t9m"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.768261 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.768283 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.771734 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.771904 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.772263 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.772373 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.772472 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.772668 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.772789 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.772963 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.773055 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.773169 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.773275 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.773554 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.781305 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.782514 5024 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.783226 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jhtl"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.783789 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.783946 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n4vqb"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.784525 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.784712 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.789941 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.790378 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.794868 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.794911 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795146 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795182 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795316 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.819674 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795328 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.820238 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795352 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795361 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795392 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795422 
5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795454 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795481 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795508 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795560 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795577 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795152 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795639 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795645 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.828576 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.828666 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-serving-cert\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.828773 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-image-import-ca\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.828885 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.828914 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1be805d-70ab-4dfa-aa6f-23b846d64124-images\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc 
kubenswrapper[5024]: I1128 17:00:53.828997 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96e29661-be19-4efb-8337-661e5af2c4a2-audit-dir\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829038 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-oauth-serving-cert\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.795683 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829105 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-etcd-serving-ca\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829192 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed40ac73-afc2-4dae-9364-e6775923e031-node-pullsecrets\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829216 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a038e211-ffae-4e8b-9abf-8b32153b2c6d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829321 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-config\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829349 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829478 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-console-config\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc 
kubenswrapper[5024]: I1128 17:00:53.795726 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829586 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-serving-cert\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829844 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a038e211-ffae-4e8b-9abf-8b32153b2c6d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.829968 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjd4\" (UniqueName: \"kubernetes.io/projected/d4cd69fe-add0-427e-a129-cfb9cecb6887-kube-api-access-fdjd4\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830061 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-audit\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830083 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4cd69fe-add0-427e-a129-cfb9cecb6887-serving-cert\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830178 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-encryption-config\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830265 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1be805d-70ab-4dfa-aa6f-23b846d64124-config\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830354 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-etcd-client\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" 
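The records above interleave three streams from the same kubelet process: SyncLoop ADD/UPDATE pod events (kubelet.go:2421 and :2428), reflector cache-population notices (reflector.go:368), and volume-reconciler activity (reconciler_common.go). Every record carries the same journald prefix plus a klog header, so the stream can be split mechanically. Below is a minimal parsing sketch, assuming Python 3; the regex is inferred only from the lines visible here, and names such as KLOG_RE, records, and the SAMPLE constant are this sketch's own choices, not anything kubelet or journald defines.

#!/usr/bin/env python3
# Illustrative sketch: split a journald+klog stream like the kubenswrapper
# records above into structured fields. Field layout is inferred from the
# visible lines only.
import re
from collections import Counter

# journald prefix ("Nov 28 17:00:53 crc kubenswrapper[5024]:") followed by a
# klog header ("I1128 17:00:53.790378 5024 reflector.go:368]").
KLOG_RE = re.compile(
    r'(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) '
    r'(?P<unit>[\w-]+)\[(?P<pid>\d+)\]: '
    r'(?P<sev>[IWEF])(?P<ktime>\d{4} \d{2}:\d{2}:\d{2}\.\d+) +\d+ '
    r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)')

# One real record copied from the log above, used as a self-test.
SAMPLE = ('Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.790378 5024 '
          'reflector.go:368] Caches populated for *v1.ConfigMap from '
          'object-"openshift-controller-manager"/"openshift-global-ca"')

def records(lines):
    """Yield one dict per parsed record; journald emits one record per line."""
    for line in lines:
        m = KLOG_RE.match(line)
        if m:
            yield m.groupdict()

if __name__ == '__main__':
    rec = next(records([SAMPLE]))
    assert rec['src'] == 'reflector.go:368'
    # Tallying the first word of msg separates SyncLoop, reflector ("Caches"),
    # and volume-reconciler traffic at a glance.
    print(Counter(r['msg'].split()[0] for r in records([SAMPLE])))

Pointed at a full capture of this unit's journal, tallying rec['src'] instead of the message prefix would show how heavily reconciler_common.go and operation_generator.go dominate this startup window relative to the SyncLoop events.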
Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830378 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a809b012-e8e1-4061-8fcf-7c9083e5569d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830468 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rmgz\" (UniqueName: \"kubernetes.io/projected/fb6a1824-13a4-427f-b277-c41045a8ad45-kube-api-access-9rmgz\") pod \"downloads-7954f5f757-jvvpl\" (UID: \"fb6a1824-13a4-427f-b277-c41045a8ad45\") " pod="openshift-console/downloads-7954f5f757-jvvpl" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830489 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-config\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830582 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830637 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kg5d\" (UniqueName: \"kubernetes.io/projected/ed40ac73-afc2-4dae-9364-e6775923e031-kube-api-access-9kg5d\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830696 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-oauth-config\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830722 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-etcd-client\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830841 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed40ac73-afc2-4dae-9364-e6775923e031-audit-dir\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830889 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-service-ca\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830905 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-audit-policies\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830939 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-548v7\" (UniqueName: \"kubernetes.io/projected/f84f4343-2000-4b50-9650-22953ca7d39d-kube-api-access-548v7\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.830961 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fms2\" (UniqueName: \"kubernetes.io/projected/a809b012-e8e1-4061-8fcf-7c9083e5569d-kube-api-access-8fms2\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831123 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlvqg\" (UniqueName: \"kubernetes.io/projected/c1be805d-70ab-4dfa-aa6f-23b846d64124-kube-api-access-qlvqg\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831156 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-encryption-config\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831178 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4w7\" (UniqueName: \"kubernetes.io/projected/a038e211-ffae-4e8b-9abf-8b32153b2c6d-kube-api-access-5v4w7\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831228 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831283 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c1be805d-70ab-4dfa-aa6f-23b846d64124-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831312 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfsbq\" (UniqueName: \"kubernetes.io/projected/96e29661-be19-4efb-8337-661e5af2c4a2-kube-api-access-rfsbq\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831356 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-client-ca\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831378 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-trusted-ca-bundle\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831421 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a809b012-e8e1-4061-8fcf-7c9083e5569d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831440 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-serving-cert\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831935 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.831954 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.835352 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.856733 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.856985 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.857273 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.857405 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.857839 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.857888 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.858504 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.857955 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.858959 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.859205 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.859510 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.859539 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.863093 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.863688 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.864378 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.865032 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.867773 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j485j"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.868462 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.870649 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.871237 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.872606 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4pxb8"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.872850 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.873966 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.873977 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.874415 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4bww5"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.874816 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.874926 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.875461 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.876010 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p4ff"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.876134 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.877183 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.877368 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.877795 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kkcnh"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.881442 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.881729 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.883356 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.883984 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.884041 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.884415 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.885313 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.886695 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.887143 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.889336 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.889754 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dqkhr"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.890185 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.890859 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.891010 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.891197 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.891538 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.893264 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.908603 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.908740 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.919905 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.924284 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.928749 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.930430 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.930663 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933001 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933202 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933627 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-serving-cert\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a038e211-ffae-4e8b-9abf-8b32153b2c6d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933692 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-audit\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933710 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4cd69fe-add0-427e-a129-cfb9cecb6887-serving-cert\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933726 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjd4\" (UniqueName: \"kubernetes.io/projected/d4cd69fe-add0-427e-a129-cfb9cecb6887-kube-api-access-fdjd4\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933744 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-encryption-config\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933759 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1be805d-70ab-4dfa-aa6f-23b846d64124-config\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933779 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a809b012-e8e1-4061-8fcf-7c9083e5569d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933796 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-etcd-client\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933813 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rmgz\" (UniqueName: \"kubernetes.io/projected/fb6a1824-13a4-427f-b277-c41045a8ad45-kube-api-access-9rmgz\") pod \"downloads-7954f5f757-jvvpl\" (UID: \"fb6a1824-13a4-427f-b277-c41045a8ad45\") " pod="openshift-console/downloads-7954f5f757-jvvpl" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933831 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-config\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933846 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933859 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kg5d\" (UniqueName: \"kubernetes.io/projected/ed40ac73-afc2-4dae-9364-e6775923e031-kube-api-access-9kg5d\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933879 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-oauth-config\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933906 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-etcd-client\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933922 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed40ac73-afc2-4dae-9364-e6775923e031-audit-dir\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933941 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-service-ca\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933964 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8fms2\" (UniqueName: \"kubernetes.io/projected/a809b012-e8e1-4061-8fcf-7c9083e5569d-kube-api-access-8fms2\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933979 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-audit-policies\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.933996 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-548v7\" (UniqueName: \"kubernetes.io/projected/f84f4343-2000-4b50-9650-22953ca7d39d-kube-api-access-548v7\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934028 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlvqg\" (UniqueName: \"kubernetes.io/projected/c1be805d-70ab-4dfa-aa6f-23b846d64124-kube-api-access-qlvqg\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934049 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-encryption-config\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934065 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v4w7\" (UniqueName: \"kubernetes.io/projected/a038e211-ffae-4e8b-9abf-8b32153b2c6d-kube-api-access-5v4w7\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934083 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934104 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1be805d-70ab-4dfa-aa6f-23b846d64124-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934122 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfsbq\" (UniqueName: \"kubernetes.io/projected/96e29661-be19-4efb-8337-661e5af2c4a2-kube-api-access-rfsbq\") pod 
\"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934139 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-client-ca\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934159 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a809b012-e8e1-4061-8fcf-7c9083e5569d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934177 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-serving-cert\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934194 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-trusted-ca-bundle\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934213 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-serving-cert\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934231 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-image-import-ca\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934248 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934263 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1be805d-70ab-4dfa-aa6f-23b846d64124-images\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934278 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/96e29661-be19-4efb-8337-661e5af2c4a2-audit-dir\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934296 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-oauth-serving-cert\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934313 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-etcd-serving-ca\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934333 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed40ac73-afc2-4dae-9364-e6775923e031-node-pullsecrets\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934348 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a038e211-ffae-4e8b-9abf-8b32153b2c6d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934363 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-config\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934379 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.934393 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-console-config\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.935273 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-console-config\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.935281 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.935850 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-client-ca\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.936335 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1be805d-70ab-4dfa-aa6f-23b846d64124-config\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.936615 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-audit\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.936899 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-audit-policies\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.937365 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-config\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.939067 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96e29661-be19-4efb-8337-661e5af2c4a2-audit-dir\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.939242 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a809b012-e8e1-4061-8fcf-7c9083e5569d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.939401 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed40ac73-afc2-4dae-9364-e6775923e031-audit-dir\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.939428 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/ed40ac73-afc2-4dae-9364-e6775923e031-node-pullsecrets\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.940622 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-image-import-ca\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.942434 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.943429 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96e29661-be19-4efb-8337-661e5af2c4a2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.944321 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1be805d-70ab-4dfa-aa6f-23b846d64124-images\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.944542 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-config\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.944622 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-oauth-serving-cert\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.945733 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed40ac73-afc2-4dae-9364-e6775923e031-etcd-serving-ca\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.945911 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-service-ca\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.946268 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-trusted-ca-bundle\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.948982 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.949426 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-serving-cert\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.949797 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.949931 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-etcd-client\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.950590 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-encryption-config\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.951009 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a809b012-e8e1-4061-8fcf-7c9083e5569d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.951112 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4cd69fe-add0-427e-a129-cfb9cecb6887-serving-cert\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.952142 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-oauth-config\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.953702 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a038e211-ffae-4e8b-9abf-8b32153b2c6d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.953961 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a038e211-ffae-4e8b-9abf-8b32153b2c6d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.954586 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1be805d-70ab-4dfa-aa6f-23b846d64124-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.955186 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed40ac73-afc2-4dae-9364-e6775923e031-serving-cert\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.958054 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-encryption-config\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.958082 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-serving-cert\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.958459 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96e29661-be19-4efb-8337-661e5af2c4a2-etcd-client\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.960498 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6jk4g"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.964931 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.966318 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.968439 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.969596 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.970761 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.974752 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vk6x4"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.978557 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r7n7g"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.979106 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.980861 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j485j"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.982155 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.983172 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-msz56"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.985846 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2dsw"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.985876 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvvpl"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.986152 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.986363 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vj7pt"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.988306 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.990284 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4pxb8"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.991863 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.992548 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jhtl"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.994965 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.996013 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g"] Nov 28 17:00:53 crc kubenswrapper[5024]: I1128 17:00:53.997231 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9jlxs"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:53.999999 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.000048 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.000318 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.000726 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.001483 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.002993 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4bww5"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.004161 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4dfg"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.005270 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.006579 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n4vqb"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.007697 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.007897 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.009243 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.010358 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9jlxs"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.011323 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.013043 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kkcnh"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.013740 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.015169 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.016427 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.017982 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p4ff"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.020116 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dqkhr"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.021624 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-msz56"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.023311 5024 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.025755 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.027406 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zgtq6"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.027964 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.028965 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.029614 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-t5nkh"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.032499 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.033976 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zgtq6"] Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.047981 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.067860 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.088346 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.107932 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.147753 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.167617 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.188722 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.207936 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.248946 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.268315 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.287730 5024 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.307542 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.328071 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.347782 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.367498 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.388689 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.407813 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.428206 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.448117 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.467805 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.487960 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.507865 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.528233 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.547889 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.567765 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.588844 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.608535 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.627636 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.655198 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" 
Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.667407 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.687340 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.708752 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.728359 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.747747 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.775625 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.788453 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.808233 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.828206 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.848190 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.868053 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.886638 5024 request.go:700] Waited for 1.001611301s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.888617 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.908163 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.927913 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.947882 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.968602 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 17:00:54 crc kubenswrapper[5024]: I1128 17:00:54.988206 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 
17:00:55.008183 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.034133 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.047874 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.068253 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.088542 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.107660 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.128153 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.148558 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.167056 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.188355 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.207857 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.227682 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.248311 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.267833 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.288498 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.308295 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.328248 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.347726 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 17:00:55 crc 
kubenswrapper[5024]: I1128 17:00:55.367867 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.388070 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.421853 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-548v7\" (UniqueName: \"kubernetes.io/projected/f84f4343-2000-4b50-9650-22953ca7d39d-kube-api-access-548v7\") pod \"console-f9d7485db-r7n7g\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") " pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.447642 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlvqg\" (UniqueName: \"kubernetes.io/projected/c1be805d-70ab-4dfa-aa6f-23b846d64124-kube-api-access-qlvqg\") pod \"machine-api-operator-5694c8668f-vk6x4\" (UID: \"c1be805d-70ab-4dfa-aa6f-23b846d64124\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.478135 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rmgz\" (UniqueName: \"kubernetes.io/projected/fb6a1824-13a4-427f-b277-c41045a8ad45-kube-api-access-9rmgz\") pod \"downloads-7954f5f757-jvvpl\" (UID: \"fb6a1824-13a4-427f-b277-c41045a8ad45\") " pod="openshift-console/downloads-7954f5f757-jvvpl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.488748 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.491809 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kg5d\" (UniqueName: \"kubernetes.io/projected/ed40ac73-afc2-4dae-9364-e6775923e031-kube-api-access-9kg5d\") pod \"apiserver-76f77b778f-6jk4g\" (UID: \"ed40ac73-afc2-4dae-9364-e6775923e031\") " pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.507285 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.520765 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.542661 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjd4\" (UniqueName: \"kubernetes.io/projected/d4cd69fe-add0-427e-a129-cfb9cecb6887-kube-api-access-fdjd4\") pod \"controller-manager-879f6c89f-v2dsw\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.557750 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.563105 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v4w7\" (UniqueName: \"kubernetes.io/projected/a038e211-ffae-4e8b-9abf-8b32153b2c6d-kube-api-access-5v4w7\") pod \"openshift-apiserver-operator-796bbdcf4f-frbqs\" (UID: \"a038e211-ffae-4e8b-9abf-8b32153b2c6d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.568277 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.583565 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvvpl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.589371 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fms2\" (UniqueName: \"kubernetes.io/projected/a809b012-e8e1-4061-8fcf-7c9083e5569d-kube-api-access-8fms2\") pod \"openshift-config-operator-7777fb866f-v9pk8\" (UID: \"a809b012-e8e1-4061-8fcf-7c9083e5569d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.602471 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfsbq\" (UniqueName: \"kubernetes.io/projected/96e29661-be19-4efb-8337-661e5af2c4a2-kube-api-access-rfsbq\") pod \"apiserver-7bbb656c7d-qx48m\" (UID: \"96e29661-be19-4efb-8337-661e5af2c4a2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.608332 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.623208 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.628943 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.648037 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.660934 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.668415 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.679509 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.688481 5024 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.712709 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.742439 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.748312 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.772979 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.800211 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.808565 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.839775 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.856449 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.857110 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.870887 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.888772 5024 request.go:700] Waited for 1.85556827s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.891337 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.911887 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.932091 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6jk4g"] Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981723 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981762 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981804 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab8f76d6-5ca4-4197-b6df-87fe4d019383-serving-cert\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981828 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/109070e7-9a47-4d07-843f-3dbccb271ecd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981845 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61ee1d79-90be-4c28-b765-806f010f4665-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981875 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1db444-6f12-4ac1-943f-b56efdbbb206-serving-cert\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981893 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-policies\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981923 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-certificates\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981951 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109070e7-9a47-4d07-843f-3dbccb271ecd-config\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981975 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8f76d6-5ca4-4197-b6df-87fe4d019383-config\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.981996 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27832198-1ba5-4c93-b41a-58a17dc734dd-serving-cert\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.982058 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbxt\" (UniqueName: \"kubernetes.io/projected/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-kube-api-access-qnbxt\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.982078 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61ee1d79-90be-4c28-b765-806f010f4665-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.982094 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.982183 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.982638 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-stats-auth\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.982969 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8103913c-f8ff-410d-8181-617787247ac0-machine-approver-tls\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:55 crc 
kubenswrapper[5024]: I1128 17:00:55.983009 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103913c-f8ff-410d-8181-617787247ac0-config\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983049 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5465\" (UniqueName: \"kubernetes.io/projected/8103913c-f8ff-410d-8181-617787247ac0-kube-api-access-s5465\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983101 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983132 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8f36997-26e7-43a4-9507-afe1d393ee29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983154 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983199 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-metrics-certs\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983256 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-dir\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983304 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-default-certificate\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:55 crc 
kubenswrapper[5024]: I1128 17:00:55.983364 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/61ee1d79-90be-4c28-b765-806f010f4665-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983415 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqk7\" (UniqueName: \"kubernetes.io/projected/231e7091-0809-44e9-9d1a-d5a1ea092a64-kube-api-access-vgqk7\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.983893 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-bound-sa-token\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.984631 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzx5\" (UniqueName: \"kubernetes.io/projected/ac1db444-6f12-4ac1-943f-b56efdbbb206-kube-api-access-7zzx5\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.984671 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-config\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.984697 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.984727 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-client-ca\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985113 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: 
\"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985452 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw2jn\" (UniqueName: \"kubernetes.io/projected/27832198-1ba5-4c93-b41a-58a17dc734dd-kube-api-access-rw2jn\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985478 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985516 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985554 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-serving-cert\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985572 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klj6\" (UniqueName: \"kubernetes.io/projected/9afc0a0f-ea3f-41c4-8196-85b09cca5655-kube-api-access-4klj6\") pod \"cluster-samples-operator-665b6dd947-s8v9n\" (UID: \"9afc0a0f-ea3f-41c4-8196-85b09cca5655\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985620 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985645 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab8f76d6-5ca4-4197-b6df-87fe4d019383-trusted-ca\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985664 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwp6\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-kube-api-access-7rwp6\") 
pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985680 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985695 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8hx\" (UniqueName: \"kubernetes.io/projected/7b08a2e9-f0f2-4749-9728-941815d60da9-kube-api-access-zr8hx\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985902 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-trusted-ca\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985928 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-config\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985952 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8103913c-f8ff-410d-8181-617787247ac0-auth-proxy-config\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:55 crc kubenswrapper[5024]: I1128 17:00:55.985971 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b08a2e9-f0f2-4749-9728-941815d60da9-service-ca-bundle\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:55 crc kubenswrapper[5024]: E1128 17:00:55.987841 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:56.487816839 +0000 UTC m=+158.536737824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:55.986014 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109070e7-9a47-4d07-843f-3dbccb271ecd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002185 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9afc0a0f-ea3f-41c4-8196-85b09cca5655-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s8v9n\" (UID: \"9afc0a0f-ea3f-41c4-8196-85b09cca5655\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002246 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002268 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kh9q\" (UniqueName: \"kubernetes.io/projected/ab8f76d6-5ca4-4197-b6df-87fe4d019383-kube-api-access-5kh9q\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002286 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002309 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002335 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-tls\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: 
\"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002357 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f36997-26e7-43a4-9507-afe1d393ee29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002377 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f36997-26e7-43a4-9507-afe1d393ee29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002399 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8wm\" (UniqueName: \"kubernetes.io/projected/61ee1d79-90be-4c28-b765-806f010f4665-kube-api-access-fq8wm\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002417 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.002445 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-config\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.103629 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.103938 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.103975 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.103995 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8103913c-f8ff-410d-8181-617787247ac0-machine-approver-tls\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.104065 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5465\" (UniqueName: \"kubernetes.io/projected/8103913c-f8ff-410d-8181-617787247ac0-kube-api-access-s5465\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.104098 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:56.604057797 +0000 UTC m=+158.652978742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.104176 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.105541 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8f36997-26e7-43a4-9507-afe1d393ee29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.105607 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-metrics-certs\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.105646 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fphf6\" (UniqueName: \"kubernetes.io/projected/af33dff7-bbd3-42d1-9995-c5c008e56e01-kube-api-access-fphf6\") pod \"multus-admission-controller-857f4d67dd-4pxb8\" (UID: \"af33dff7-bbd3-42d1-9995-c5c008e56e01\") 
" pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.105810 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-default-certificate\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.105892 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/61ee1d79-90be-4c28-b765-806f010f4665-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.105966 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nk56\" (UniqueName: \"kubernetes.io/projected/73be74ad-f659-4b81-b809-266f951e4994-kube-api-access-5nk56\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106070 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c63a391a-52c5-4121-b857-052c0962cf5a-webhook-cert\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106131 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzx5\" (UniqueName: \"kubernetes.io/projected/ac1db444-6f12-4ac1-943f-b56efdbbb206-kube-api-access-7zzx5\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106161 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-config\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106171 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106213 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-mountpoint-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 
17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106242 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba788cd-c369-417f-a2b5-fb92019fc864-config\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106294 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-client-ca\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106329 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-secret-volume\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106379 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a471ed62-1700-448f-a592-568efaafca96-metrics-tls\") pod \"dns-operator-744455d44c-4bww5\" (UID: \"a471ed62-1700-448f-a592-568efaafca96\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106413 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18150739-785b-44d6-8d0b-6f73eb45e9a7-srv-cert\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106462 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-proxy-tls\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106525 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba788cd-c369-417f-a2b5-fb92019fc864-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106560 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-metrics-tls\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc 
kubenswrapper[5024]: I1128 17:00:56.106619 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-serving-cert\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106650 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klj6\" (UniqueName: \"kubernetes.io/projected/9afc0a0f-ea3f-41c4-8196-85b09cca5655-kube-api-access-4klj6\") pod \"cluster-samples-operator-665b6dd947-s8v9n\" (UID: \"9afc0a0f-ea3f-41c4-8196-85b09cca5655\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106773 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6t2c\" (UniqueName: \"kubernetes.io/projected/78b42959-ba28-4734-b550-04e7d70496b8-kube-api-access-h6t2c\") pod \"ingress-canary-zgtq6\" (UID: \"78b42959-ba28-4734-b550-04e7d70496b8\") " pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106807 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xc5\" (UniqueName: \"kubernetes.io/projected/80a843cd-6141-431e-83c1-a7ce0110e31f-kube-api-access-v9xc5\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106878 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106910 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab8f76d6-5ca4-4197-b6df-87fe4d019383-trusted-ca\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106958 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba788cd-c369-417f-a2b5-fb92019fc864-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.106988 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7kdr\" (UniqueName: \"kubernetes.io/projected/13151297-dd89-4f46-8614-04670773ad2b-kube-api-access-c7kdr\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107056 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rwp6\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-kube-api-access-7rwp6\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107111 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107143 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-config\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107197 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxs4s\" (UniqueName: \"kubernetes.io/projected/8d5fa786-7bad-487b-8b04-53bc1849d41a-kube-api-access-nxs4s\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107227 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107279 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107314 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8103913c-f8ff-410d-8181-617787247ac0-auth-proxy-config\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107366 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9afc0a0f-ea3f-41c4-8196-85b09cca5655-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s8v9n\" (UID: \"9afc0a0f-ea3f-41c4-8196-85b09cca5655\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107426 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d965e97-c291-48c5-9be5-188c921a0350-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f36997-26e7-43a4-9507-afe1d393ee29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107483 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8wm\" (UniqueName: \"kubernetes.io/projected/61ee1d79-90be-4c28-b765-806f010f4665-kube-api-access-fq8wm\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107528 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13151297-dd89-4f46-8614-04670773ad2b-config-volume\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107556 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnhg\" (UniqueName: \"kubernetes.io/projected/0ebab130-4c94-441a-90a2-a20310673821-kube-api-access-btnhg\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107604 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09ca800c-f2da-4db9-8570-a3605b84835e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107630 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/73be74ad-f659-4b81-b809-266f951e4994-srv-cert\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107677 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107713 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab8f76d6-5ca4-4197-b6df-87fe4d019383-serving-cert\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107760 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af33dff7-bbd3-42d1-9995-c5c008e56e01-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4pxb8\" (UID: \"af33dff7-bbd3-42d1-9995-c5c008e56e01\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.107788 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-trusted-ca\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108236 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-etcd-service-ca\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108299 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1db444-6f12-4ac1-943f-b56efdbbb206-serving-cert\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108330 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c63a391a-52c5-4121-b857-052c0962cf5a-apiservice-cert\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108382 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-certificates\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108408 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdvz\" (UniqueName: \"kubernetes.io/projected/c63a391a-52c5-4121-b857-052c0962cf5a-kube-api-access-mfdvz\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108456 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09c095d1-717c-43f6-9022-f46530bac373-etcd-client\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78b42959-ba28-4734-b550-04e7d70496b8-cert\") pod \"ingress-canary-zgtq6\" (UID: \"78b42959-ba28-4734-b550-04e7d70496b8\") " pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108510 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv2g8\" (UniqueName: \"kubernetes.io/projected/0d965e97-c291-48c5-9be5-188c921a0350-kube-api-access-fv2g8\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108574 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27832198-1ba5-4c93-b41a-58a17dc734dd-serving-cert\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108622 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09ca800c-f2da-4db9-8570-a3605b84835e-proxy-tls\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108656 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61ee1d79-90be-4c28-b765-806f010f4665-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108700 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108725 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-stats-auth\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108749 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-socket-dir\") 
pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108800 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10abaa97-056b-4cd6-adbb-36b64dcef7cd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cw8g\" (UID: \"10abaa97-056b-4cd6-adbb-36b64dcef7cd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108830 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108877 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108909 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103913c-f8ff-410d-8181-617787247ac0-config\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108958 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.108989 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18150739-785b-44d6-8d0b-6f73eb45e9a7-profile-collector-cert\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.109053 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-dir\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.109114 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgqk7\" (UniqueName: 
\"kubernetes.io/projected/231e7091-0809-44e9-9d1a-d5a1ea092a64-kube-api-access-vgqk7\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.109681 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-dir\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.111627 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/61ee1d79-90be-4c28-b765-806f010f4665-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.111834 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f36997-26e7-43a4-9507-afe1d393ee29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.112403 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-config\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.112786 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-certificates\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.112869 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d965e97-c291-48c5-9be5-188c921a0350-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.112908 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c095d1-717c-43f6-9022-f46530bac373-serving-cert\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.112960 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-bound-sa-token\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: 
\"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.112998 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:56.612973151 +0000 UTC m=+158.661894056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.113146 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-plugins-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.113245 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.113932 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8103913c-f8ff-410d-8181-617787247ac0-machine-approver-tls\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.114184 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.114218 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jqj\" (UniqueName: \"kubernetes.io/projected/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-kube-api-access-55jqj\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.114258 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13151297-dd89-4f46-8614-04670773ad2b-metrics-tls\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 
17:00:56.114267 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8103913c-f8ff-410d-8181-617787247ac0-config\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115048 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw2jn\" (UniqueName: \"kubernetes.io/projected/27832198-1ba5-4c93-b41a-58a17dc734dd-kube-api-access-rw2jn\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115094 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzh4\" (UniqueName: \"kubernetes.io/projected/09ca800c-f2da-4db9-8570-a3605b84835e-kube-api-access-dwzh4\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115151 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdv6n\" (UniqueName: \"kubernetes.io/projected/eb689c5a-3342-4dd0-ba63-30477d447ac4-kube-api-access-gdv6n\") pod \"migrator-59844c95c7-x56ns\" (UID: \"eb689c5a-3342-4dd0-ba63-30477d447ac4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115252 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115704 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115790 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61ee1d79-90be-4c28-b765-806f010f4665-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.115815 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab8f76d6-5ca4-4197-b6df-87fe4d019383-trusted-ca\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116170 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116268 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c44k\" (UniqueName: \"kubernetes.io/projected/09c095d1-717c-43f6-9022-f46530bac373-kube-api-access-9c44k\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116326 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/73be74ad-f659-4b81-b809-266f951e4994-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116354 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcv79\" (UniqueName: \"kubernetes.io/projected/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-kube-api-access-hcv79\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116368 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8103913c-f8ff-410d-8181-617787247ac0-auth-proxy-config\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116419 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr8hx\" (UniqueName: \"kubernetes.io/projected/7b08a2e9-f0f2-4749-9728-941815d60da9-kube-api-access-zr8hx\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116450 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjnj9\" (UniqueName: \"kubernetes.io/projected/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-kube-api-access-tjnj9\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116527 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-trusted-ca\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116565 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-config\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116688 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1db444-6f12-4ac1-943f-b56efdbbb206-serving-cert\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116698 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-config-volume\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.116738 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kzncc\" (UID: \"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117123 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-client-ca\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117221 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0ebab130-4c94-441a-90a2-a20310673821-signing-key\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117302 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b08a2e9-f0f2-4749-9728-941815d60da9-service-ca-bundle\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117337 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8d5fa786-7bad-487b-8b04-53bc1849d41a-node-bootstrap-token\") pod 
\"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117414 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109070e7-9a47-4d07-843f-3dbccb271ecd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117467 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-tls\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117501 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117532 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kh9q\" (UniqueName: \"kubernetes.io/projected/ab8f76d6-5ca4-4197-b6df-87fe4d019383-kube-api-access-5kh9q\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117571 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117579 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-trusted-ca\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117603 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.117902 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-config\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 
17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118686 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b08a2e9-f0f2-4749-9728-941815d60da9-service-ca-bundle\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118732 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6874\" (UniqueName: \"kubernetes.io/projected/a471ed62-1700-448f-a592-568efaafca96-kube-api-access-v6874\") pod \"dns-operator-744455d44c-4bww5\" (UID: \"a471ed62-1700-448f-a592-568efaafca96\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118776 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-config\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118804 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f36997-26e7-43a4-9507-afe1d393ee29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118861 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118894 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09ca800c-f2da-4db9-8570-a3605b84835e-images\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118936 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzpc9\" (UniqueName: \"kubernetes.io/projected/ecaf8d7e-7f08-44c9-b980-db9180876825-kube-api-access-pzpc9\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.118966 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.119058 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.119088 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c63a391a-52c5-4121-b857-052c0962cf5a-tmpfs\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.119114 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46lrf\" (UniqueName: \"kubernetes.io/projected/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-kube-api-access-46lrf\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.119259 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-csi-data-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.119285 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7n8k\" (UniqueName: \"kubernetes.io/projected/10abaa97-056b-4cd6-adbb-36b64dcef7cd-kube-api-access-s7n8k\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cw8g\" (UID: \"10abaa97-056b-4cd6-adbb-36b64dcef7cd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.119456 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27832198-1ba5-4c93-b41a-58a17dc734dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.119923 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/109070e7-9a47-4d07-843f-3dbccb271ecd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.121169 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9afc0a0f-ea3f-41c4-8196-85b09cca5655-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s8v9n\" (UID: \"9afc0a0f-ea3f-41c4-8196-85b09cca5655\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.121226 5024 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-config\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.121276 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.121766 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8f36997-26e7-43a4-9507-afe1d393ee29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.121850 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61ee1d79-90be-4c28-b765-806f010f4665-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.121895 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-policies\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.121926 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8d5fa786-7bad-487b-8b04-53bc1849d41a-certs\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.123480 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4pt4\" (UniqueName: \"kubernetes.io/projected/46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1-kube-api-access-m4pt4\") pod \"package-server-manager-789f6589d5-kzncc\" (UID: \"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.123539 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109070e7-9a47-4d07-843f-3dbccb271ecd-config\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.123560 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnd2b\" (UniqueName: 
\"kubernetes.io/projected/18150739-785b-44d6-8d0b-6f73eb45e9a7-kube-api-access-vnd2b\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.123612 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0ebab130-4c94-441a-90a2-a20310673821-signing-cabundle\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.123641 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8f76d6-5ca4-4197-b6df-87fe4d019383-config\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.123663 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-etcd-ca\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.123687 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-registration-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.124084 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbxt\" (UniqueName: \"kubernetes.io/projected/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-kube-api-access-qnbxt\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.124361 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.124978 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109070e7-9a47-4d07-843f-3dbccb271ecd-config\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.125336 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27832198-1ba5-4c93-b41a-58a17dc734dd-serving-cert\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.125915 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8f76d6-5ca4-4197-b6df-87fe4d019383-config\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.126450 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab8f76d6-5ca4-4197-b6df-87fe4d019383-serving-cert\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.133588 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-serving-cert\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.142554 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-policies\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.142594 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.145277 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-metrics-certs\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.146358 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.160125 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-stats-auth\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.160155 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-tls\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.166131 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.183121 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klj6\" (UniqueName: \"kubernetes.io/projected/9afc0a0f-ea3f-41c4-8196-85b09cca5655-kube-api-access-4klj6\") pod \"cluster-samples-operator-665b6dd947-s8v9n\" (UID: \"9afc0a0f-ea3f-41c4-8196-85b09cca5655\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.203882 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzx5\" (UniqueName: \"kubernetes.io/projected/ac1db444-6f12-4ac1-943f-b56efdbbb206-kube-api-access-7zzx5\") pod \"route-controller-manager-6576b87f9c-qnhjg\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.222859 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8wm\" (UniqueName: \"kubernetes.io/projected/61ee1d79-90be-4c28-b765-806f010f4665-kube-api-access-fq8wm\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.223546 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.223832 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.224970 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225459 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225633 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-secret-volume\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225652 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a471ed62-1700-448f-a592-568efaafca96-metrics-tls\") pod \"dns-operator-744455d44c-4bww5\" (UID: \"a471ed62-1700-448f-a592-568efaafca96\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225673 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18150739-785b-44d6-8d0b-6f73eb45e9a7-srv-cert\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.225748 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:56.725685978 +0000 UTC m=+158.774606883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225802 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-proxy-tls\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225903 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba788cd-c369-417f-a2b5-fb92019fc864-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225941 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-metrics-tls\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225976 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6t2c\" (UniqueName: \"kubernetes.io/projected/78b42959-ba28-4734-b550-04e7d70496b8-kube-api-access-h6t2c\") pod \"ingress-canary-zgtq6\" (UID: \"78b42959-ba28-4734-b550-04e7d70496b8\") " pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.225998 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xc5\" (UniqueName: \"kubernetes.io/projected/80a843cd-6141-431e-83c1-a7ce0110e31f-kube-api-access-v9xc5\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226047 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226071 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba788cd-c369-417f-a2b5-fb92019fc864-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226092 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7kdr\" (UniqueName: \"kubernetes.io/projected/13151297-dd89-4f46-8614-04670773ad2b-kube-api-access-c7kdr\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226135 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxs4s\" (UniqueName: \"kubernetes.io/projected/8d5fa786-7bad-487b-8b04-53bc1849d41a-kube-api-access-nxs4s\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226156 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226183 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-config\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226204 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226235 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d965e97-c291-48c5-9be5-188c921a0350-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226261 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13151297-dd89-4f46-8614-04670773ad2b-config-volume\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226289 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09ca800c-f2da-4db9-8570-a3605b84835e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226314 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btnhg\" (UniqueName: 
\"kubernetes.io/projected/0ebab130-4c94-441a-90a2-a20310673821-kube-api-access-btnhg\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226337 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226357 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/73be74ad-f659-4b81-b809-266f951e4994-srv-cert\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226384 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af33dff7-bbd3-42d1-9995-c5c008e56e01-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4pxb8\" (UID: \"af33dff7-bbd3-42d1-9995-c5c008e56e01\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226405 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-trusted-ca\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226429 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-etcd-service-ca\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226452 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c63a391a-52c5-4121-b857-052c0962cf5a-apiservice-cert\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226475 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdvz\" (UniqueName: \"kubernetes.io/projected/c63a391a-52c5-4121-b857-052c0962cf5a-kube-api-access-mfdvz\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226493 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09c095d1-717c-43f6-9022-f46530bac373-etcd-client\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc 
kubenswrapper[5024]: I1128 17:00:56.226511 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78b42959-ba28-4734-b550-04e7d70496b8-cert\") pod \"ingress-canary-zgtq6\" (UID: \"78b42959-ba28-4734-b550-04e7d70496b8\") " pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226534 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv2g8\" (UniqueName: \"kubernetes.io/projected/0d965e97-c291-48c5-9be5-188c921a0350-kube-api-access-fv2g8\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226559 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09ca800c-f2da-4db9-8570-a3605b84835e-proxy-tls\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226598 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-socket-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226638 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10abaa97-056b-4cd6-adbb-36b64dcef7cd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cw8g\" (UID: \"10abaa97-056b-4cd6-adbb-36b64dcef7cd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226681 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226705 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18150739-785b-44d6-8d0b-6f73eb45e9a7-profile-collector-cert\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226739 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d965e97-c291-48c5-9be5-188c921a0350-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226760 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c095d1-717c-43f6-9022-f46530bac373-serving-cert\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226790 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-plugins-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226887 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55jqj\" (UniqueName: \"kubernetes.io/projected/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-kube-api-access-55jqj\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226910 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13151297-dd89-4f46-8614-04670773ad2b-metrics-tls\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226966 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzh4\" (UniqueName: \"kubernetes.io/projected/09ca800c-f2da-4db9-8570-a3605b84835e-kube-api-access-dwzh4\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.226985 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdv6n\" (UniqueName: \"kubernetes.io/projected/eb689c5a-3342-4dd0-ba63-30477d447ac4-kube-api-access-gdv6n\") pod \"migrator-59844c95c7-x56ns\" (UID: \"eb689c5a-3342-4dd0-ba63-30477d447ac4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227013 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c44k\" (UniqueName: \"kubernetes.io/projected/09c095d1-717c-43f6-9022-f46530bac373-kube-api-access-9c44k\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227051 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/73be74ad-f659-4b81-b809-266f951e4994-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227071 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcv79\" (UniqueName: \"kubernetes.io/projected/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-kube-api-access-hcv79\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227091 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjnj9\" (UniqueName: \"kubernetes.io/projected/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-kube-api-access-tjnj9\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227122 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-config-volume\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227143 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kzncc\" (UID: \"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227166 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0ebab130-4c94-441a-90a2-a20310673821-signing-key\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227188 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8d5fa786-7bad-487b-8b04-53bc1849d41a-node-bootstrap-token\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227235 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6874\" (UniqueName: \"kubernetes.io/projected/a471ed62-1700-448f-a592-568efaafca96-kube-api-access-v6874\") pod \"dns-operator-744455d44c-4bww5\" (UID: \"a471ed62-1700-448f-a592-568efaafca96\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227265 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09ca800c-f2da-4db9-8570-a3605b84835e-images\") pod 
\"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227284 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzpc9\" (UniqueName: \"kubernetes.io/projected/ecaf8d7e-7f08-44c9-b980-db9180876825-kube-api-access-pzpc9\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227308 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-csi-data-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227329 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7n8k\" (UniqueName: \"kubernetes.io/projected/10abaa97-056b-4cd6-adbb-36b64dcef7cd-kube-api-access-s7n8k\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cw8g\" (UID: \"10abaa97-056b-4cd6-adbb-36b64dcef7cd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227351 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c63a391a-52c5-4121-b857-052c0962cf5a-tmpfs\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227371 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46lrf\" (UniqueName: \"kubernetes.io/projected/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-kube-api-access-46lrf\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227410 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8d5fa786-7bad-487b-8b04-53bc1849d41a-certs\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227433 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4pt4\" (UniqueName: \"kubernetes.io/projected/46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1-kube-api-access-m4pt4\") pod \"package-server-manager-789f6589d5-kzncc\" (UID: \"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227454 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnd2b\" (UniqueName: \"kubernetes.io/projected/18150739-785b-44d6-8d0b-6f73eb45e9a7-kube-api-access-vnd2b\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 
28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227472 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0ebab130-4c94-441a-90a2-a20310673821-signing-cabundle\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227493 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-registration-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227513 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-etcd-ca\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227541 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227574 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fphf6\" (UniqueName: \"kubernetes.io/projected/af33dff7-bbd3-42d1-9995-c5c008e56e01-kube-api-access-fphf6\") pod \"multus-admission-controller-857f4d67dd-4pxb8\" (UID: \"af33dff7-bbd3-42d1-9995-c5c008e56e01\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nk56\" (UniqueName: \"kubernetes.io/projected/73be74ad-f659-4b81-b809-266f951e4994-kube-api-access-5nk56\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c63a391a-52c5-4121-b857-052c0962cf5a-webhook-cert\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227649 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-mountpoint-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.227668 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba788cd-c369-417f-a2b5-fb92019fc864-config\") pod 
\"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.228558 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba788cd-c369-417f-a2b5-fb92019fc864-config\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.228830 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-secret-volume\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.229932 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18150739-785b-44d6-8d0b-6f73eb45e9a7-srv-cert\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.230849 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a471ed62-1700-448f-a592-568efaafca96-metrics-tls\") pod \"dns-operator-744455d44c-4bww5\" (UID: \"a471ed62-1700-448f-a592-568efaafca96\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.233327 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-proxy-tls\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.233674 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-metrics-tls\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.233904 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.234463 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:56.734447727 +0000 UTC m=+158.783368632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.234686 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-config-volume\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.236147 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10abaa97-056b-4cd6-adbb-36b64dcef7cd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cw8g\" (UID: \"10abaa97-056b-4cd6-adbb-36b64dcef7cd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.245255 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-config\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.245616 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d965e97-c291-48c5-9be5-188c921a0350-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.245980 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.246253 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-socket-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.246473 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-trusted-ca\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.246642 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-plugins-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.247406 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.247488 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-registration-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.247862 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-etcd-service-ca\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.248005 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-csi-data-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.248047 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ecaf8d7e-7f08-44c9-b980-db9180876825-mountpoint-dir\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.248316 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c63a391a-52c5-4121-b857-052c0962cf5a-tmpfs\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.249976 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.250632 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09c095d1-717c-43f6-9022-f46530bac373-etcd-ca\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.251245 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09ca800c-f2da-4db9-8570-a3605b84835e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.251559 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09ca800c-f2da-4db9-8570-a3605b84835e-images\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.258741 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c095d1-717c-43f6-9022-f46530bac373-serving-cert\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.258770 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.258876 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78b42959-ba28-4734-b550-04e7d70496b8-cert\") pod \"ingress-canary-zgtq6\" (UID: \"78b42959-ba28-4734-b550-04e7d70496b8\") " pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.258904 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8d5fa786-7bad-487b-8b04-53bc1849d41a-node-bootstrap-token\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.259171 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/109070e7-9a47-4d07-843f-3dbccb271ecd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.259187 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d965e97-c291-48c5-9be5-188c921a0350-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.259387 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b08a2e9-f0f2-4749-9728-941815d60da9-default-certificate\") pod \"router-default-5444994796-b2t9m\" (UID: 
\"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.259606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgqk7\" (UniqueName: \"kubernetes.io/projected/231e7091-0809-44e9-9d1a-d5a1ea092a64-kube-api-access-vgqk7\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.259611 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18150739-785b-44d6-8d0b-6f73eb45e9a7-profile-collector-cert\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.259634 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.260067 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/73be74ad-f659-4b81-b809-266f951e4994-srv-cert\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.260120 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af33dff7-bbd3-42d1-9995-c5c008e56e01-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4pxb8\" (UID: \"af33dff7-bbd3-42d1-9995-c5c008e56e01\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.260148 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5465\" (UniqueName: \"kubernetes.io/projected/8103913c-f8ff-410d-8181-617787247ac0-kube-api-access-s5465\") pod \"machine-approver-56656f9798-gmrg6\" (UID: \"8103913c-f8ff-410d-8181-617787247ac0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.261557 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09ca800c-f2da-4db9-8570-a3605b84835e-proxy-tls\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.262350 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-bound-sa-token\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.264502 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7rwp6\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-kube-api-access-7rwp6\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.265830 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/73be74ad-f659-4b81-b809-266f951e4994-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.265930 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c63a391a-52c5-4121-b857-052c0962cf5a-apiservice-cert\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.266286 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kzncc\" (UID: \"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.266720 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8d5fa786-7bad-487b-8b04-53bc1849d41a-certs\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.267199 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.267600 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba788cd-c369-417f-a2b5-fb92019fc864-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.267702 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09c095d1-717c-43f6-9022-f46530bac373-etcd-client\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.269753 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0ebab130-4c94-441a-90a2-a20310673821-signing-key\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: 
\"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.270176 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c63a391a-52c5-4121-b857-052c0962cf5a-webhook-cert\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.276987 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0ebab130-4c94-441a-90a2-a20310673821-signing-cabundle\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.278283 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13151297-dd89-4f46-8614-04670773ad2b-config-volume\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.281359 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13151297-dd89-4f46-8614-04670773ad2b-metrics-tls\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.289343 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7jhtl\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.292221 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw2jn\" (UniqueName: \"kubernetes.io/projected/27832198-1ba5-4c93-b41a-58a17dc734dd-kube-api-access-rw2jn\") pod \"authentication-operator-69f744f599-l4dfg\" (UID: \"27832198-1ba5-4c93-b41a-58a17dc734dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.302919 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs"] Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.307553 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2dsw"] Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.309599 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.320143 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vk6x4"] Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.320449 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr8hx\" (UniqueName: \"kubernetes.io/projected/7b08a2e9-f0f2-4749-9728-941815d60da9-kube-api-access-zr8hx\") pod \"router-default-5444994796-b2t9m\" (UID: \"7b08a2e9-f0f2-4749-9728-941815d60da9\") " pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.329041 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.330003 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:56.829978445 +0000 UTC m=+158.878899350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.345806 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvvpl"] Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.347232 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kh9q\" (UniqueName: \"kubernetes.io/projected/ab8f76d6-5ca4-4197-b6df-87fe4d019383-kube-api-access-5kh9q\") pod \"console-operator-58897d9998-vj7pt\" (UID: \"ab8f76d6-5ca4-4197-b6df-87fe4d019383\") " pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.351172 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/109070e7-9a47-4d07-843f-3dbccb271ecd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-v4b2j\" (UID: \"109070e7-9a47-4d07-843f-3dbccb271ecd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.365287 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8f36997-26e7-43a4-9507-afe1d393ee29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6xbnl\" (UID: \"c8f36997-26e7-43a4-9507-afe1d393ee29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.376553 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-r7n7g"] Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.385970 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61ee1d79-90be-4c28-b765-806f010f4665-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c7d2s\" (UID: \"61ee1d79-90be-4c28-b765-806f010f4665\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.386626 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.387272 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m"] Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.387573 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.401375 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbxt\" (UniqueName: \"kubernetes.io/projected/28f5e1d8-5fbc-4328-8783-78c3a2d2e53b-kube-api-access-qnbxt\") pod \"service-ca-operator-777779d784-xdqw9\" (UID: \"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.404153 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.411832 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.417860 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.430330 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.430942 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.431350 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:56.93133294 +0000 UTC m=+158.980253845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.441935 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.458281 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.462072 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.464990 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.472396 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.482507 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcv79\" (UniqueName: \"kubernetes.io/projected/bd5edd56-6cd5-4268-8728-0ba97f2e5cca-kube-api-access-hcv79\") pod \"ingress-operator-5b745b69d9-wn4qw\" (UID: \"bd5edd56-6cd5-4268-8728-0ba97f2e5cca\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.500529 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7kdr\" (UniqueName: \"kubernetes.io/projected/13151297-dd89-4f46-8614-04670773ad2b-kube-api-access-c7kdr\") pod \"dns-default-9jlxs\" (UID: \"13151297-dd89-4f46-8614-04670773ad2b\") " pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.532657 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.533169 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.033107276 +0000 UTC m=+159.082028191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.541969 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8"] Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.544923 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.545167 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxs4s\" (UniqueName: \"kubernetes.io/projected/8d5fa786-7bad-487b-8b04-53bc1849d41a-kube-api-access-nxs4s\") pod \"machine-config-server-t5nkh\" (UID: \"8d5fa786-7bad-487b-8b04-53bc1849d41a\") " pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.563149 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6t2c\" (UniqueName: \"kubernetes.io/projected/78b42959-ba28-4734-b550-04e7d70496b8-kube-api-access-h6t2c\") pod \"ingress-canary-zgtq6\" (UID: \"78b42959-ba28-4734-b550-04e7d70496b8\") " pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.583672 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjnj9\" (UniqueName: \"kubernetes.io/projected/f1466f5c-7d00-415a-9a1a-d2f694a6ac17-kube-api-access-tjnj9\") pod \"openshift-controller-manager-operator-756b6f6bc6-wkhmw\" (UID: \"f1466f5c-7d00-415a-9a1a-d2f694a6ac17\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.600405 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv2g8\" (UniqueName: \"kubernetes.io/projected/0d965e97-c291-48c5-9be5-188c921a0350-kube-api-access-fv2g8\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfqrh\" (UID: \"0d965e97-c291-48c5-9be5-188c921a0350\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.622630 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xc5\" (UniqueName: \"kubernetes.io/projected/80a843cd-6141-431e-83c1-a7ce0110e31f-kube-api-access-v9xc5\") pod \"marketplace-operator-79b997595-6p4ff\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") " pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.624183 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdv6n\" (UniqueName: \"kubernetes.io/projected/eb689c5a-3342-4dd0-ba63-30477d447ac4-kube-api-access-gdv6n\") pod \"migrator-59844c95c7-x56ns\" (UID: \"eb689c5a-3342-4dd0-ba63-30477d447ac4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.625910 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btnhg\" (UniqueName: \"kubernetes.io/projected/0ebab130-4c94-441a-90a2-a20310673821-kube-api-access-btnhg\") pod \"service-ca-9c57cc56f-dqkhr\" (UID: \"0ebab130-4c94-441a-90a2-a20310673821\") " pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: W1128 17:00:56.631998 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6a1824_13a4_427f_b277_c41045a8ad45.slice/crio-4467da21c992091f190a16170042ab1a0b0875812d7ac9fe35bf2298dadf8190 WatchSource:0}: Error finding container 4467da21c992091f190a16170042ab1a0b0875812d7ac9fe35bf2298dadf8190: Status 404 returned error can't find the container with id 4467da21c992091f190a16170042ab1a0b0875812d7ac9fe35bf2298dadf8190 Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.635162 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.635604 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.135579621 +0000 UTC m=+159.184500526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: W1128 17:00:56.636231 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84f4343_2000_4b50_9650_22953ca7d39d.slice/crio-13c5d1c28c1b581cee4ad83a822bc148d031c8d47edb71640e191476415de622 WatchSource:0}: Error finding container 13c5d1c28c1b581cee4ad83a822bc148d031c8d47edb71640e191476415de622: Status 404 returned error can't find the container with id 13c5d1c28c1b581cee4ad83a822bc148d031c8d47edb71640e191476415de622 Nov 28 17:00:56 crc kubenswrapper[5024]: W1128 17:00:56.636939 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e29661_be19_4efb_8337_661e5af2c4a2.slice/crio-e2a3826430941c49f4a1ee2f5bf8ccf41e0a3a5920f63a1ffb3cac71e0a72c96 WatchSource:0}: Error finding container e2a3826430941c49f4a1ee2f5bf8ccf41e0a3a5920f63a1ffb3cac71e0a72c96: Status 404 returned error can't find the container with id e2a3826430941c49f4a1ee2f5bf8ccf41e0a3a5920f63a1ffb3cac71e0a72c96 Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.637799 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" event={"ID":"ed40ac73-afc2-4dae-9364-e6775923e031","Type":"ContainerStarted","Data":"dcbedf3ec72a43834104d61d3cc43fb7081ee28f53517b76d209a7145119077f"} Nov 28 
17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.641189 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" event={"ID":"8103913c-f8ff-410d-8181-617787247ac0","Type":"ContainerStarted","Data":"e0f977c2fce55d57779e73ce6d10676ecb5e0806b72602bca68d8dc4da3aabc3"} Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.645352 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" event={"ID":"c1be805d-70ab-4dfa-aa6f-23b846d64124","Type":"ContainerStarted","Data":"14e7d91fa9b7ea0da7a08cdf2c29a9d51f99127c0e88e15bd35132639b6800f2"} Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.645573 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c44k\" (UniqueName: \"kubernetes.io/projected/09c095d1-717c-43f6-9022-f46530bac373-kube-api-access-9c44k\") pod \"etcd-operator-b45778765-kkcnh\" (UID: \"09c095d1-717c-43f6-9022-f46530bac373\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.646825 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" event={"ID":"a038e211-ffae-4e8b-9abf-8b32153b2c6d","Type":"ContainerStarted","Data":"8cebd126f9e2dc969b0bf00ab3bc1ba485a99bfb5db2811a3ef9c00e0bb924b5"} Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.647923 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" event={"ID":"d4cd69fe-add0-427e-a129-cfb9cecb6887","Type":"ContainerStarted","Data":"a91fb40398c7bb7a1428b49790f94bd0384112309052091ab6c5b908aa35e54b"} Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.662893 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55jqj\" (UniqueName: \"kubernetes.io/projected/005aa4d7-4177-4a67-abeb-ff0c25b0ae9b-kube-api-access-55jqj\") pod \"machine-config-controller-84d6567774-j485j\" (UID: \"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.666478 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9jlxs" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.679012 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zgtq6" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.683922 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t5nkh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.686640 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwzh4\" (UniqueName: \"kubernetes.io/projected/09ca800c-f2da-4db9-8570-a3605b84835e-kube-api-access-dwzh4\") pod \"machine-config-operator-74547568cd-prxwd\" (UID: \"09ca800c-f2da-4db9-8570-a3605b84835e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.696922 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.704208 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aba788cd-c369-417f-a2b5-fb92019fc864-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8f8nq\" (UID: \"aba788cd-c369-417f-a2b5-fb92019fc864\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.721165 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzpc9\" (UniqueName: \"kubernetes.io/projected/ecaf8d7e-7f08-44c9-b980-db9180876825-kube-api-access-pzpc9\") pod \"csi-hostpathplugin-msz56\" (UID: \"ecaf8d7e-7f08-44c9-b980-db9180876825\") " pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.736163 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.736741 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.236720698 +0000 UTC m=+159.285641603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.747120 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7n8k\" (UniqueName: \"kubernetes.io/projected/10abaa97-056b-4cd6-adbb-36b64dcef7cd-kube-api-access-s7n8k\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cw8g\" (UID: \"10abaa97-056b-4cd6-adbb-36b64dcef7cd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.768324 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46lrf\" (UniqueName: \"kubernetes.io/projected/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-kube-api-access-46lrf\") pod \"collect-profiles-29405820-dgz4c\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.788817 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.789389 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.793792 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nk56\" (UniqueName: \"kubernetes.io/projected/73be74ad-f659-4b81-b809-266f951e4994-kube-api-access-5nk56\") pod \"olm-operator-6b444d44fb-ldx2f\" (UID: \"73be74ad-f659-4b81-b809-266f951e4994\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.811905 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fphf6\" (UniqueName: \"kubernetes.io/projected/af33dff7-bbd3-42d1-9995-c5c008e56e01-kube-api-access-fphf6\") pod \"multus-admission-controller-857f4d67dd-4pxb8\" (UID: \"af33dff7-bbd3-42d1-9995-c5c008e56e01\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.825888 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4pt4\" (UniqueName: \"kubernetes.io/projected/46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1-kube-api-access-m4pt4\") pod \"package-server-manager-789f6589d5-kzncc\" (UID: \"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.833562 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.837879 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.838362 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.33834373 +0000 UTC m=+159.387264635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.845059 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdvz\" (UniqueName: \"kubernetes.io/projected/c63a391a-52c5-4121-b857-052c0962cf5a-kube-api-access-mfdvz\") pod \"packageserver-d55dfcdfc-4pgzf\" (UID: \"c63a391a-52c5-4121-b857-052c0962cf5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.856657 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.866393 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.868314 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.898474 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.898697 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.899519 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6874\" (UniqueName: \"kubernetes.io/projected/a471ed62-1700-448f-a592-568efaafca96-kube-api-access-v6874\") pod \"dns-operator-744455d44c-4bww5\" (UID: \"a471ed62-1700-448f-a592-568efaafca96\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.904648 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnd2b\" (UniqueName: \"kubernetes.io/projected/18150739-785b-44d6-8d0b-6f73eb45e9a7-kube-api-access-vnd2b\") pod \"catalog-operator-68c6474976-gthl8\" (UID: \"18150739-785b-44d6-8d0b-6f73eb45e9a7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.905625 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.915856 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.925692 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.932756 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.938979 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:56 crc kubenswrapper[5024]: E1128 17:00:56.939545 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.439519709 +0000 UTC m=+159.488440604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:56 crc kubenswrapper[5024]: I1128 17:00:56.958873 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-msz56" Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.050204 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.050615 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.55060081 +0000 UTC m=+159.599521715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.098583 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.109473 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.119356 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.151633 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.152080 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.652056087 +0000 UTC m=+159.700976992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.176822 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9"] Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.176892 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg"] Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.196630 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.231388 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4dfg"] Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.253989 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.254745 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.754729588 +0000 UTC m=+159.803650493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: W1128 17:00:57.335070 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28f5e1d8_5fbc_4328_8783_78c3a2d2e53b.slice/crio-f06a356fc0a81d8ef7453bb79474fa38d15d8c85c9545a17728a30f9716e19a0 WatchSource:0}: Error finding container f06a356fc0a81d8ef7453bb79474fa38d15d8c85c9545a17728a30f9716e19a0: Status 404 returned error can't find the container with id f06a356fc0a81d8ef7453bb79474fa38d15d8c85c9545a17728a30f9716e19a0 Nov 28 17:00:57 crc kubenswrapper[5024]: W1128 17:00:57.350362 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac1db444_6f12_4ac1_943f_b56efdbbb206.slice/crio-14eb5f711c3596f9f888f0e9f57a69403a3fa16e39f05c8f63859b603b5f3efd WatchSource:0}: Error finding container 14eb5f711c3596f9f888f0e9f57a69403a3fa16e39f05c8f63859b603b5f3efd: Status 404 returned error can't find the container with id 14eb5f711c3596f9f888f0e9f57a69403a3fa16e39f05c8f63859b603b5f3efd Nov 28 17:00:57 crc kubenswrapper[5024]: W1128 17:00:57.355191 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27832198_1ba5_4c93_b41a_58a17dc734dd.slice/crio-63675a9a2ff658965337b48d74c420419c145f7c1660e5d5edcfe663945f84cc WatchSource:0}: Error finding container 63675a9a2ff658965337b48d74c420419c145f7c1660e5d5edcfe663945f84cc: Status 404 returned error can't find the container with id 63675a9a2ff658965337b48d74c420419c145f7c1660e5d5edcfe663945f84cc Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.358007 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.358190 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.858147931 +0000 UTC m=+159.907068836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.358316 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.358924 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.858910113 +0000 UTC m=+159.907831018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.463640 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.463888 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.963844158 +0000 UTC m=+160.012765093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.464606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.465129 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.965103274 +0000 UTC m=+160.014024179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.565916 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.567627 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.06758295 +0000 UTC m=+160.116503865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.582193 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.582621 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.082602597 +0000 UTC m=+160.131523502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.683157 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.683503 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.183462247 +0000 UTC m=+160.232383152 (durationBeforeRetry 500ms). 
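[annotation] Each failure arms the same gate: the nestedpendingoperations messages say a new attempt for this volume is refused until the last failure time plus durationBeforeRetry has passed, which is why every retry lands almost exactly 500ms after the previous error. A standalone sketch of that gating follows; the field and function names (pendingOp, tryStart) are illustrative, not the kubelet's.

package main

import (
	"fmt"
	"time"
)

// pendingOp mimics one entry in a pending-operations table: the last
// failure time plus the backoff window decide whether a new attempt
// may start now.
type pendingOp struct {
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

func (op *pendingOp) tryStart(now time.Time) error {
	retryAt := op.lastErrorTime.Add(op.durationBeforeRetry)
	if now.Before(retryAt) {
		return fmt.Errorf("No retries permitted until %s (durationBeforeRetry %s)",
			retryAt.Format("2006-01-02 15:04:05.000000000 -0700 MST"), op.durationBeforeRetry)
	}
	return nil
}

func main() {
	op := &pendingOp{lastErrorTime: time.Now(), durationBeforeRetry: 500 * time.Millisecond}
	fmt.Println(op.tryStart(time.Now())) // refused: still inside the 500ms window
	time.Sleep(600 * time.Millisecond)
	fmt.Println(op.tryStart(time.Now())) // <nil>: the window has passed, retry allowed
}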
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.688934 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" event={"ID":"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b","Type":"ContainerStarted","Data":"f06a356fc0a81d8ef7453bb79474fa38d15d8c85c9545a17728a30f9716e19a0"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.724344 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" event={"ID":"27832198-1ba5-4c93-b41a-58a17dc734dd","Type":"ContainerStarted","Data":"63675a9a2ff658965337b48d74c420419c145f7c1660e5d5edcfe663945f84cc"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.754805 5024 generic.go:334] "Generic (PLEG): container finished" podID="ed40ac73-afc2-4dae-9364-e6775923e031" containerID="36ebce62f4e4dc2d84c54b33f04b47d3418cf90adedebc137cbc9744558884c2" exitCode=0 Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.754911 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" event={"ID":"ed40ac73-afc2-4dae-9364-e6775923e031","Type":"ContainerDied","Data":"36ebce62f4e4dc2d84c54b33f04b47d3418cf90adedebc137cbc9744558884c2"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.771650 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" event={"ID":"a809b012-e8e1-4061-8fcf-7c9083e5569d","Type":"ContainerStarted","Data":"bcd14eab324d79d854a5419b72c7dcb0716937db0f6ce3d95c0eec350c9eab6c"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.772761 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw"] Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.788781 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.789190 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.289175725 +0000 UTC m=+160.338096630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.798405 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" event={"ID":"a038e211-ffae-4e8b-9abf-8b32153b2c6d","Type":"ContainerStarted","Data":"781990397bc5f6d748c7157505957ee9aab8956fa08ca13b169e38a830ff1a95"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.805822 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r7n7g" event={"ID":"f84f4343-2000-4b50-9650-22953ca7d39d","Type":"ContainerStarted","Data":"13c5d1c28c1b581cee4ad83a822bc148d031c8d47edb71640e191476415de622"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.809162 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvvpl" event={"ID":"fb6a1824-13a4-427f-b277-c41045a8ad45","Type":"ContainerStarted","Data":"4467da21c992091f190a16170042ab1a0b0875812d7ac9fe35bf2298dadf8190"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.813351 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" event={"ID":"96e29661-be19-4efb-8337-661e5af2c4a2","Type":"ContainerStarted","Data":"e2a3826430941c49f4a1ee2f5bf8ccf41e0a3a5920f63a1ffb3cac71e0a72c96"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.828996 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t5nkh" event={"ID":"8d5fa786-7bad-487b-8b04-53bc1849d41a","Type":"ContainerStarted","Data":"01fee4996e1310d2d0bf70783803b3bbf3e5f4b4d52a25a5f7e2bd6c12ae91dc"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.830605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" event={"ID":"d4cd69fe-add0-427e-a129-cfb9cecb6887","Type":"ContainerStarted","Data":"c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.830992 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.831868 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" event={"ID":"ac1db444-6f12-4ac1-943f-b56efdbbb206","Type":"ContainerStarted","Data":"14eb5f711c3596f9f888f0e9f57a69403a3fa16e39f05c8f63859b603b5f3efd"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.832560 5024 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v2dsw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.832597 5024 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" podUID="d4cd69fe-add0-427e-a129-cfb9cecb6887" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.835415 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b2t9m" event={"ID":"7b08a2e9-f0f2-4749-9728-941815d60da9","Type":"ContainerStarted","Data":"bd57cfc1a304616bd2ff0b7fa82a7f8505ab41366f85b1b203330863384fd7e1"} Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.902303 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:57 crc kubenswrapper[5024]: E1128 17:00:57.903456 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.403435297 +0000 UTC m=+160.452356202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[5024]: I1128 17:00:57.949935 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jhtl"] Nov 28 17:00:57 crc kubenswrapper[5024]: W1128 17:00:57.972820 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd5edd56_6cd5_4268_8728_0ba97f2e5cca.slice/crio-eb849acab2721ccae9f677aad0fb9291ef988be61252ba6e3a07413c2531c172 WatchSource:0}: Error finding container eb849acab2721ccae9f677aad0fb9291ef988be61252ba6e3a07413c2531c172: Status 404 returned error can't find the container with id eb849acab2721ccae9f677aad0fb9291ef988be61252ba6e3a07413c2531c172 Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.006297 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.007188 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.507165258 +0000 UTC m=+160.556086223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.013790 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vj7pt"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.016989 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.066226 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.071123 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.089351 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.109127 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.109612 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.609593193 +0000 UTC m=+160.658514098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.174423 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw"] Nov 28 17:00:58 crc kubenswrapper[5024]: W1128 17:00:58.177809 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61ee1d79_90be_4c28_b765_806f010f4665.slice/crio-47af2db052fc08e44b1d2314984a993e2aaea86b5fdcd201214fabe78d72d68c WatchSource:0}: Error finding container 47af2db052fc08e44b1d2314984a993e2aaea86b5fdcd201214fabe78d72d68c: Status 404 returned error can't find the container with id 47af2db052fc08e44b1d2314984a993e2aaea86b5fdcd201214fabe78d72d68c Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.212244 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.212709 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.712686766 +0000 UTC m=+160.761607671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.233779 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9jlxs"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.316324 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.316902 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.816879201 +0000 UTC m=+160.865800106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: W1128 17:00:58.319499 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1466f5c_7d00_415a_9a1a_d2f694a6ac17.slice/crio-e1e7ff153ab1781f57f7c8d6bf1f8bf9c901612b5decac54b20630329501ce6c WatchSource:0}: Error finding container e1e7ff153ab1781f57f7c8d6bf1f8bf9c901612b5decac54b20630329501ce6c: Status 404 returned error can't find the container with id e1e7ff153ab1781f57f7c8d6bf1f8bf9c901612b5decac54b20630329501ce6c Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.419795 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.420237 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:58.920222822 +0000 UTC m=+160.969143727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.452965 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" podStartSLOduration=139.452937903 podStartE2EDuration="2m19.452937903s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:58.445971544 +0000 UTC m=+160.494892469" watchObservedRunningTime="2025-11-28 17:00:58.452937903 +0000 UTC m=+160.501858808" Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.520481 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.520737 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 17:00:59.020707931 +0000 UTC m=+161.069628836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.520996 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.521325 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.021319168 +0000 UTC m=+161.070240073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: W1128 17:00:58.539601 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13151297_dd89_4f46_8614_04670773ad2b.slice/crio-70f70843884ce97077c9260e1771ae980a660aa65cc79874bf5012da3e0c1021 WatchSource:0}: Error finding container 70f70843884ce97077c9260e1771ae980a660aa65cc79874bf5012da3e0c1021: Status 404 returned error can't find the container with id 70f70843884ce97077c9260e1771ae980a660aa65cc79874bf5012da3e0c1021 Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.554177 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-frbqs" podStartSLOduration=140.554151073 podStartE2EDuration="2m20.554151073s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:58.552757103 +0000 UTC m=+160.601678008" watchObservedRunningTime="2025-11-28 17:00:58.554151073 +0000 UTC m=+160.603071978" Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.622593 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.622745 5024 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.122721764 +0000 UTC m=+161.171642659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.622934 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.623264 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.123256179 +0000 UTC m=+161.172177084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.725378 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.725581 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.22554502 +0000 UTC m=+161.274465925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.728164 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.728733 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.22871739 +0000 UTC m=+161.277638295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.825696 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.829567 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.829652 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.329632451 +0000 UTC m=+161.378553356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.832810 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.833279 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.333255365 +0000 UTC m=+161.382176260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.833703 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.840030 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq"] Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.941112 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:58 crc kubenswrapper[5024]: E1128 17:00:58.946285 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.446210859 +0000 UTC m=+161.495131764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.965311 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" event={"ID":"8103913c-f8ff-410d-8181-617787247ac0","Type":"ContainerStarted","Data":"538c539874adbd4802af52deda7d73b27904ad21ee4c7f88b244b59c4c922b32"} Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.971797 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" event={"ID":"bd5edd56-6cd5-4268-8728-0ba97f2e5cca","Type":"ContainerStarted","Data":"eb849acab2721ccae9f677aad0fb9291ef988be61252ba6e3a07413c2531c172"} Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.980245 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" event={"ID":"c8f36997-26e7-43a4-9507-afe1d393ee29","Type":"ContainerStarted","Data":"7e3adb71f37f1f61ba623b27a4253173d7b0d7c9a68489f0a6981e46d7cc0875"} Nov 28 17:00:58 crc kubenswrapper[5024]: W1128 17:00:58.984159 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaba788cd_c369_417f_a2b5_fb92019fc864.slice/crio-d89436b138d1b346ab3ee9959a218b617d7153a02d900a47a81f0c4b58fff530 WatchSource:0}: Error finding container d89436b138d1b346ab3ee9959a218b617d7153a02d900a47a81f0c4b58fff530: Status 404 returned error can't find the container with id d89436b138d1b346ab3ee9959a218b617d7153a02d900a47a81f0c4b58fff530 Nov 28 17:00:58 crc kubenswrapper[5024]: W1128 17:00:58.985918 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbbc3e77_bdd7_4ca2_bbed_bcf4d118385a.slice/crio-7442ee939586d779a31f3c6be3650dfdcd22531483388d7e18b91486f1a17fca WatchSource:0}: Error finding container 7442ee939586d779a31f3c6be3650dfdcd22531483388d7e18b91486f1a17fca: Status 404 returned error can't find the container with id 7442ee939586d779a31f3c6be3650dfdcd22531483388d7e18b91486f1a17fca Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.986078 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9jlxs" event={"ID":"13151297-dd89-4f46-8614-04670773ad2b","Type":"ContainerStarted","Data":"70f70843884ce97077c9260e1771ae980a660aa65cc79874bf5012da3e0c1021"} Nov 28 17:00:58 crc kubenswrapper[5024]: W1128 17:00:58.988787 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10abaa97_056b_4cd6_adbb_36b64dcef7cd.slice/crio-3a3b36ccda5c457bf05a375ff40292f40e88b4ded2cf5c478972157a43922185 WatchSource:0}: Error finding container 3a3b36ccda5c457bf05a375ff40292f40e88b4ded2cf5c478972157a43922185: Status 404 returned error can't find the container with id 3a3b36ccda5c457bf05a375ff40292f40e88b4ded2cf5c478972157a43922185 Nov 28 17:00:58 crc kubenswrapper[5024]: I1128 17:00:58.991761 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-r7n7g" event={"ID":"f84f4343-2000-4b50-9650-22953ca7d39d","Type":"ContainerStarted","Data":"aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.025647 5024 generic.go:334] "Generic (PLEG): container finished" podID="a809b012-e8e1-4061-8fcf-7c9083e5569d" containerID="3858b76a5d5639eb85b9f3f8536fcc0404738142df874f0be091d68a61147b85" exitCode=0 Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.025748 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" event={"ID":"a809b012-e8e1-4061-8fcf-7c9083e5569d","Type":"ContainerDied","Data":"3858b76a5d5639eb85b9f3f8536fcc0404738142df874f0be091d68a61147b85"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.028968 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" event={"ID":"ac1db444-6f12-4ac1-943f-b56efdbbb206","Type":"ContainerStarted","Data":"f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.036057 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" event={"ID":"109070e7-9a47-4d07-843f-3dbccb271ecd","Type":"ContainerStarted","Data":"788121866c43202748b6282ef975007b4703087d47622ff460028e5cbef23948"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.042900 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.045010 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.544982619 +0000 UTC m=+161.593903524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.056496 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" event={"ID":"ab8f76d6-5ca4-4197-b6df-87fe4d019383","Type":"ContainerStarted","Data":"3f393a697388da33ec09c0833da8d7ba7e3a4b0c7e8dae6fd57b13cff8bf7ff5"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.081524 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j485j"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.098773 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.100610 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zgtq6"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.102697 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.105175 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" podStartSLOduration=139.105152061 podStartE2EDuration="2m19.105152061s" podCreationTimestamp="2025-11-28 16:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:59.09035828 +0000 UTC m=+161.139279185" watchObservedRunningTime="2025-11-28 17:00:59.105152061 +0000 UTC m=+161.154072966" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.109571 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvvpl" event={"ID":"fb6a1824-13a4-427f-b277-c41045a8ad45","Type":"ContainerStarted","Data":"cce0a63c9579734d99bd07bb10df9fdd41f4c8591a49ef30b5323ae311947484"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.110441 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jvvpl" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.120204 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" event={"ID":"f1466f5c-7d00-415a-9a1a-d2f694a6ac17","Type":"ContainerStarted","Data":"e1e7ff153ab1781f57f7c8d6bf1f8bf9c901612b5decac54b20630329501ce6c"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.120389 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-r7n7g" podStartSLOduration=140.120366934 podStartE2EDuration="2m20.120366934s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:59.118531782 +0000 UTC m=+161.167452707" 
watchObservedRunningTime="2025-11-28 17:00:59.120366934 +0000 UTC m=+161.169287839" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.130853 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.130923 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.144808 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.145374 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.645343385 +0000 UTC m=+161.694264320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.145967 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" event={"ID":"28f5e1d8-5fbc-4328-8783-78c3a2d2e53b","Type":"ContainerStarted","Data":"9d8124111466961e1995f04d894494ae21295be0b333df96486742aac4b715e9"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.163403 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" event={"ID":"27832198-1ba5-4c93-b41a-58a17dc734dd","Type":"ContainerStarted","Data":"a65e4f60ab49e80152c2e098753dcc30510f1ff156a7685dbeed7bcc61eb1a36"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.176929 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jvvpl" podStartSLOduration=140.176901493 podStartE2EDuration="2m20.176901493s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:59.158721226 +0000 UTC m=+161.207642131" watchObservedRunningTime="2025-11-28 17:00:59.176901493 +0000 UTC m=+161.225822408" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.177139 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" 
event={"ID":"c1be805d-70ab-4dfa-aa6f-23b846d64124","Type":"ContainerStarted","Data":"a9d7d7d21cf97cac456ca6703b63053b7d5bc59c264f27aae5b6ac87282f7f46"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.179733 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b2t9m" event={"ID":"7b08a2e9-f0f2-4749-9728-941815d60da9","Type":"ContainerStarted","Data":"00c1ebfc6f021e43f6820cf33e36779dd2fa3dde1337e41ea419db478ecdd75a"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.190443 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xdqw9" podStartSLOduration=139.190418018 podStartE2EDuration="2m19.190418018s" podCreationTimestamp="2025-11-28 16:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:59.187817784 +0000 UTC m=+161.236738689" watchObservedRunningTime="2025-11-28 17:00:59.190418018 +0000 UTC m=+161.239338923" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.247982 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.255552 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.75553013 +0000 UTC m=+161.804451035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.299983 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-b2t9m" podStartSLOduration=140.299943904 podStartE2EDuration="2m20.299943904s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:59.267684636 +0000 UTC m=+161.316605541" watchObservedRunningTime="2025-11-28 17:00:59.299943904 +0000 UTC m=+161.348864809" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.301878 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.305071 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4dfg" podStartSLOduration=141.3050498 podStartE2EDuration="2m21.3050498s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:59.303544087 +0000 UTC m=+161.352464992" watchObservedRunningTime="2025-11-28 17:00:59.3050498 +0000 UTC m=+161.353970705" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.323180 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" event={"ID":"61ee1d79-90be-4c28-b765-806f010f4665","Type":"ContainerStarted","Data":"47af2db052fc08e44b1d2314984a993e2aaea86b5fdcd201214fabe78d72d68c"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.325820 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.333011 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-msz56"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.336744 5024 generic.go:334] "Generic (PLEG): container finished" podID="96e29661-be19-4efb-8337-661e5af2c4a2" containerID="9835064a837e986de68b6bdc071fafdaf428ca9375ad60708edd695aab2d9038" exitCode=0 Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.336882 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" event={"ID":"96e29661-be19-4efb-8337-661e5af2c4a2","Type":"ContainerDied","Data":"9835064a837e986de68b6bdc071fafdaf428ca9375ad60708edd695aab2d9038"} Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.342132 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" event={"ID":"231e7091-0809-44e9-9d1a-d5a1ea092a64","Type":"ContainerStarted","Data":"a5f598a84dabb88a64291037cf58bfb0ae88070661c646835d6c37f806d6f655"} Nov 28 17:00:59 crc kubenswrapper[5024]: 
I1128 17:00:59.342198 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.364200 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.374833 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.376895 5024 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v2dsw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.376936 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" podUID="d4cd69fe-add0-427e-a129-cfb9cecb6887" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.381614 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p4ff"] Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.386610 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.88658049 +0000 UTC m=+161.935501395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.390364 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dqkhr"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.396978 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.418994 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.420305 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.420370 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.433837 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4pxb8"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.434492 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kkcnh"] Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.466441 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.467441 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:00:59.96742532 +0000 UTC m=+162.016346225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: W1128 17:00:59.478958 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73be74ad_f659_4b81_b809_266f951e4994.slice/crio-e970106048f55fe3bdc650da18515f9f64b3a4083b494bedab92473ebe10cb81 WatchSource:0}: Error finding container e970106048f55fe3bdc650da18515f9f64b3a4083b494bedab92473ebe10cb81: Status 404 returned error can't find the container with id e970106048f55fe3bdc650da18515f9f64b3a4083b494bedab92473ebe10cb81 Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.483905 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4bww5"] Nov 28 17:00:59 crc kubenswrapper[5024]: W1128 17:00:59.507329 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d965e97_c291_48c5_9be5_188c921a0350.slice/crio-2716b1afd5905af76a2821e2f1c4572d4c95af0269ed7a8811902ad64fb69336 WatchSource:0}: Error finding container 2716b1afd5905af76a2821e2f1c4572d4c95af0269ed7a8811902ad64fb69336: Status 404 returned error can't find the container with id 2716b1afd5905af76a2821e2f1c4572d4c95af0269ed7a8811902ad64fb69336 Nov 28 17:00:59 crc kubenswrapper[5024]: W1128 17:00:59.522676 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecaf8d7e_7f08_44c9_b980_db9180876825.slice/crio-73f67a43a1d0e9229ae2c8d3d78d857d57963d7beb93e5636f66007510d1e210 WatchSource:0}: Error finding container 73f67a43a1d0e9229ae2c8d3d78d857d57963d7beb93e5636f66007510d1e210: Status 404 returned error can't find the container with id 73f67a43a1d0e9229ae2c8d3d78d857d57963d7beb93e5636f66007510d1e210 Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.569629 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.569922 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.069848475 +0000 UTC m=+162.118769390 (durationBeforeRetry 500ms). 
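[annotation] The manager.go:1169 warnings scattered through this window look alarming but are a benign startup race in the stats collector (cAdvisor): it sees a new crio-<id> cgroup appear, asks the runtime about that container ID before the runtime can answer for it, gets a 404, and drops the event rather than escalating. A sketch of that tolerate-and-drop handling; the types and names (runtime, handleWatchEvent) are illustrative only.

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("Status 404 returned error can't find the container")

// runtime stands in for whatever answers container-by-ID lookups.
type runtime struct{ containers map[string]bool }

func (r *runtime) lookup(id string) error {
	if !r.containers[id] {
		return errNotFound
	}
	return nil
}

// handleWatchEvent logs and drops events for containers the runtime
// does not (yet) know about, instead of treating the race as fatal.
func handleWatchEvent(r *runtime, id string) {
	if err := r.lookup(id); err != nil {
		fmt.Printf("W Failed to process watch event for %s: %v\n", id, err)
		return
	}
	fmt.Println("stats collection started for", id)
}

func main() {
	// The cgroup event arrives before the container is visible: dropped.
	handleWatchEvent(&runtime{containers: map[string]bool{}}, "e970106048f5")
}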
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.570045 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.571493 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.071480341 +0000 UTC m=+162.120401326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.671235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.671460 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.171427725 +0000 UTC m=+162.220348630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.671623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.672159 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.172134175 +0000 UTC m=+162.221055150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.834111 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.834310 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.334267299 +0000 UTC m=+162.383188204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.834804 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.835263 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.335238396 +0000 UTC m=+162.384159301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:59 crc kubenswrapper[5024]: I1128 17:00:59.936010 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:59 crc kubenswrapper[5024]: E1128 17:00:59.936620 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.43659114 +0000 UTC m=+162.485512045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.037392 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.037860 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.537840191 +0000 UTC m=+162.586761096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.138576 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.138820 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.638782434 +0000 UTC m=+162.687703339 (durationBeforeRetry 500ms). 
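The rhythm of the repetition is set by two intervals: the volume reconciler re-submits the pending unmount and mount roughly every 100ms (the operationExecutor lines above land at .466, .569, .671, .834, .936, .037, and so on), and every failure sets a fresh earliest-retry deadline 500ms out, which nestedpendingoperations reports as "No retries permitted until ... (durationBeforeRetry 500ms)". Below is a self-contained sketch of that deadline gating; it models the idea behind the message, not the kubelet's actual control flow, and the constant 500ms window is taken from this log:

// retry_gate.go: a failed operation may not run again before its retry
// deadline, no matter how often the reconciler re-queues it.
package main

import (
    "errors"
    "fmt"
    "time"
)

type gatedOp struct {
    notBefore time.Time     // "No retries permitted until ..."
    delay     time.Duration // "(durationBeforeRetry 500ms)"
}

func (g *gatedOp) attempt(run func() error) {
    if time.Now().Before(g.notBefore) {
        return // re-queued too early: skipped without an attempt
    }
    if err := run(); err != nil {
        g.notBefore = time.Now().Add(g.delay)
        fmt.Printf("failed, no retries permitted until %s: %v\n",
            g.notBefore.Format("15:04:05.000"), err)
        return
    }
    fmt.Println("succeeded")
}

func main() {
    registered := false // flips once the CSI plugin registers
    mount := func() error {
        if !registered {
            return errors.New("driver not found in the list of registered CSI drivers")
        }
        return nil
    }
    op := &gatedOp{delay: 500 * time.Millisecond}
    for i := 0; i < 10; i++ { // the reconciler fires more often than retries are allowed
        op.attempt(mount)
        time.Sleep(100 * time.Millisecond)
        if i == 6 {
            registered = true
        }
    }
}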
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.139347 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.139889 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.639869255 +0000 UTC m=+162.688790160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.240239 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.240558 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.740534059 +0000 UTC m=+162.789454974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.240648 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.241575 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.741563858 +0000 UTC m=+162.790484763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.343742 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.343946 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.843911639 +0000 UTC m=+162.892832544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.344108 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.344536 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.844524847 +0000 UTC m=+162.893445762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.423336 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:00 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:00 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:00 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.423413 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.446097 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.446614 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:00.946591811 +0000 UTC m=+162.995512716 (durationBeforeRetry 500ms). 
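The router's startup probe has meanwhile changed character. At 17:00:59 the probe could not connect at all ("connection refused"); from 17:01:00 the router answers but returns 500, and the response body enumerates its healthz sub-checks: backend-http and has-synced still failing, process-running already ok, hence "healthz check failed". A sketch of querying such an endpoint the way the prober reports it; the URL is the one from the log and is only reachable from the node itself:

// probe_check.go: read a healthz-style endpoint and report its sub-checks.
package main

import (
    "bufio"
    "fmt"
    "net/http"
    "strings"
    "time"
)

func main() {
    client := &http.Client{Timeout: 2 * time.Second}
    resp, err := client.Get("http://localhost:1936/healthz/ready")
    if err != nil {
        // e.g. "connect: connection refused" while the router is still starting
        fmt.Println("probe failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.StatusCode) // 500 means at least one sub-check failed
    // The body lists one sub-check per line: "[+]name ok" or "[-]name failed: ...".
    sc := bufio.NewScanner(resp.Body)
    for sc.Scan() {
        line := sc.Text()
        switch {
        case strings.HasPrefix(line, "[-]"):
            fmt.Println("failing:", strings.TrimPrefix(line, "[-]"))
        case strings.HasPrefix(line, "[+]"):
            fmt.Println("passing:", strings.TrimPrefix(line, "[+]"))
        }
    }
}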
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.519892 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" event={"ID":"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b","Type":"ContainerStarted","Data":"dfa9e0ed20208c2a7b7d1b6b4aad1ba2ce21fd29fd11969f7d402233360ca27d"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.521413 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" event={"ID":"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1","Type":"ContainerStarted","Data":"31a0654e07c256df27e5eb77f580367ae5ea64a5c017473390dcd11ef4f3a928"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.523345 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" event={"ID":"109070e7-9a47-4d07-843f-3dbccb271ecd","Type":"ContainerStarted","Data":"b77388cbaa5f3d25eef0a4109d32960f06f0addbd6a7daf76452bc9402389e66"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.529092 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" event={"ID":"09ca800c-f2da-4db9-8570-a3605b84835e","Type":"ContainerStarted","Data":"c4cd731a056db986dfdeff273269a1b61fbae21659aa9e7351db841927eac51b"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.530244 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" event={"ID":"09c095d1-717c-43f6-9022-f46530bac373","Type":"ContainerStarted","Data":"8cc405e92ce089b471a4bfa88b78aa7f64d656a4ca32b73deb31553e9be768f4"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.531009 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" event={"ID":"10abaa97-056b-4cd6-adbb-36b64dcef7cd","Type":"ContainerStarted","Data":"3a3b36ccda5c457bf05a375ff40292f40e88b4ded2cf5c478972157a43922185"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.541647 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t5nkh" event={"ID":"8d5fa786-7bad-487b-8b04-53bc1849d41a","Type":"ContainerStarted","Data":"03527410ddd6f168dbed608402d7c4895048db7d250bfb7ed5b09366f02fa1eb"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.555564 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.556886 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.056868969 +0000 UTC m=+163.105790064 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.557693 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" event={"ID":"eb689c5a-3342-4dd0-ba63-30477d447ac4","Type":"ContainerStarted","Data":"b385abc1c46db6ca7657de03a81031a6d45a97aeddb76eb931a690adf7447c02"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.560471 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" event={"ID":"af33dff7-bbd3-42d1-9995-c5c008e56e01","Type":"ContainerStarted","Data":"2f1cec45c133e404eda80b013d38a551de190642d95f53d9bb09eb5d080ff756"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.585633 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zgtq6" event={"ID":"78b42959-ba28-4734-b550-04e7d70496b8","Type":"ContainerStarted","Data":"11df4f9658386317337870f98b7794806fc6612c0a133cf242ef5bca9b5bb2ac"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.611184 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-v4b2j" podStartSLOduration=141.611165564 podStartE2EDuration="2m21.611165564s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:00.555226822 +0000 UTC m=+162.604147727" watchObservedRunningTime="2025-11-28 17:01:00.611165564 +0000 UTC m=+162.660086469" Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.611706 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-t5nkh" podStartSLOduration=7.61170192 podStartE2EDuration="7.61170192s" podCreationTimestamp="2025-11-28 17:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:00.608933251 +0000 UTC m=+162.657854176" watchObservedRunningTime="2025-11-28 17:01:00.61170192 +0000 UTC m=+162.660622825" Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.629594 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" event={"ID":"f1466f5c-7d00-415a-9a1a-d2f694a6ac17","Type":"ContainerStarted","Data":"95ae5cb5baea005bd5d48861741600762d775caf5296ebe8277b094cb1250eba"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.632000 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" event={"ID":"0ebab130-4c94-441a-90a2-a20310673821","Type":"ContainerStarted","Data":"67f3cfc4babf603b7b2a969a98d421afb7b757a9d0407fa596093a31d2bec13d"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.632898 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" event={"ID":"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a","Type":"ContainerStarted","Data":"7442ee939586d779a31f3c6be3650dfdcd22531483388d7e18b91486f1a17fca"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.634424 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" event={"ID":"c1be805d-70ab-4dfa-aa6f-23b846d64124","Type":"ContainerStarted","Data":"e70a7689deee018cda620772c6f53d161457f81eee01f821f794fd3e80408e2b"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.635682 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" event={"ID":"c8f36997-26e7-43a4-9507-afe1d393ee29","Type":"ContainerStarted","Data":"c59e624ef3223bd447d348267e1e93a2a2f6b7ebcfcae2028f990e4c137c5171"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.644963 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" event={"ID":"a471ed62-1700-448f-a592-568efaafca96","Type":"ContainerStarted","Data":"532e67ac738e575743219edf88c9453ca783c63ff94df4c3473184887df855f5"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.654936 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wkhmw" podStartSLOduration=141.654914509 podStartE2EDuration="2m21.654914509s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:00.652034687 +0000 UTC m=+162.700955592" watchObservedRunningTime="2025-11-28 17:01:00.654914509 +0000 UTC m=+162.703835424" Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.656933 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.657981 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.157954496 +0000 UTC m=+163.206875391 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.680702 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6xbnl" podStartSLOduration=141.680678602 podStartE2EDuration="2m21.680678602s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:00.677815221 +0000 UTC m=+162.726736126" watchObservedRunningTime="2025-11-28 17:01:00.680678602 +0000 UTC m=+162.729599507" Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.688825 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" event={"ID":"8103913c-f8ff-410d-8181-617787247ac0","Type":"ContainerStarted","Data":"2aab0dde4f4b18b40d3820035e62050359c76119a6349cb8ca0a4a7a4e435326"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.693356 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" event={"ID":"73be74ad-f659-4b81-b809-266f951e4994","Type":"ContainerStarted","Data":"e970106048f55fe3bdc650da18515f9f64b3a4083b494bedab92473ebe10cb81"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.719631 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" event={"ID":"0d965e97-c291-48c5-9be5-188c921a0350","Type":"ContainerStarted","Data":"2716b1afd5905af76a2821e2f1c4572d4c95af0269ed7a8811902ad64fb69336"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.720669 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vk6x4" podStartSLOduration=141.72064924 podStartE2EDuration="2m21.72064924s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:00.711610273 +0000 UTC m=+162.760531178" watchObservedRunningTime="2025-11-28 17:01:00.72064924 +0000 UTC m=+162.769570145" Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.759432 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.760787 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.260760791 +0000 UTC m=+163.309681686 (durationBeforeRetry 500ms). 
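The pod_startup_latency_tracker entries are plain arithmetic: podStartSLOduration is the observed running time minus podCreationTimestamp. For openshift-kube-scheduler-operator-5fdd9b5758-6xbnl above, 17:01:00.680678602 minus 16:58:39 gives the reported 141.680678602s, i.e. 2m21.680678602s; and because firstStartedPulling/lastFinishedPulling are zero values (no image pull was needed), the E2E duration is identical. A standard-library check of that subtraction:

// slo_duration.go: reproduce one podStartSLOduration value from this log.
package main

import (
    "fmt"
    "log"
    "time"
)

func main() {
    // Layout matching the timestamps as the kubelet prints them.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    created, err := time.Parse(layout, "2025-11-28 16:58:39 +0000 UTC")
    if err != nil {
        log.Fatal(err)
    }
    running, err := time.Parse(layout, "2025-11-28 17:01:00.680678602 +0000 UTC")
    if err != nil {
        log.Fatal(err)
    }
    d := running.Sub(created)
    fmt.Println(d.Seconds()) // 141.680678602, the podStartSLOduration
    fmt.Println(d)           // 2m21.680678602s, the podStartE2EDuration
}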
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.791453 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" event={"ID":"aba788cd-c369-417f-a2b5-fb92019fc864","Type":"ContainerStarted","Data":"d89436b138d1b346ab3ee9959a218b617d7153a02d900a47a81f0c4b58fff530"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.860933 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.861356 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.361339353 +0000 UTC m=+163.410260248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.890342 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" event={"ID":"80a843cd-6141-431e-83c1-a7ce0110e31f","Type":"ContainerStarted","Data":"ae81daa2c7c1fbfa0f7b6dbb689378384aae840a496609b93eac095058c05013"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.891840 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" event={"ID":"bd5edd56-6cd5-4268-8728-0ba97f2e5cca","Type":"ContainerStarted","Data":"2e9319f8fed7b42e21620b2dd74d65380dd80bc491430252f5c31641c4bc8db8"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.892824 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" event={"ID":"9afc0a0f-ea3f-41c4-8196-85b09cca5655","Type":"ContainerStarted","Data":"623402d13e2ede8b361efd9f2b99a076a046f550366309fbe724284336f6a6cd"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.892849 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" event={"ID":"9afc0a0f-ea3f-41c4-8196-85b09cca5655","Type":"ContainerStarted","Data":"b6c64acb917c352727eddabca84dcbad4eff6701efe2e4b8936546908c023490"}
Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.894365 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" event={"ID":"c63a391a-52c5-4121-b857-052c0962cf5a","Type":"ContainerStarted","Data":"355c175592b824e6ab86253cf8deeb69380efdce22401ddbdcdedacaea008be5"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.898301 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" event={"ID":"ab8f76d6-5ca4-4197-b6df-87fe4d019383","Type":"ContainerStarted","Data":"5bca6e93d34441ab8203790772c033c991ae75ef9f4f70cb46c9bad33b253f93"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.901760 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.911148 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" event={"ID":"61ee1d79-90be-4c28-b765-806f010f4665","Type":"ContainerStarted","Data":"5ec42d4248347bf8a57155c363bfa23d5d74ed00ddf88a7f5697e4fb0f2d0ce4"} Nov 28 17:01:00 crc kubenswrapper[5024]: I1128 17:01:00.966751 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:00 crc kubenswrapper[5024]: E1128 17:01:00.988691 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.488668886 +0000 UTC m=+163.537589791 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.027774 5024 patch_prober.go:28] interesting pod/console-operator-58897d9998-vj7pt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.027868 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" podUID="ab8f76d6-5ca4-4197-b6df-87fe4d019383" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.053966 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gmrg6" podStartSLOduration=143.053917883 podStartE2EDuration="2m23.053917883s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:00.769771247 +0000 UTC m=+162.818692162" watchObservedRunningTime="2025-11-28 17:01:01.053917883 +0000 UTC m=+163.102838788" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.076910 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.080682 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.580636953 +0000 UTC m=+163.629557868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.113467 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9jlxs" event={"ID":"13151297-dd89-4f46-8614-04670773ad2b","Type":"ContainerStarted","Data":"45fc583b10b4cc4c89b6908d608e7a49d19713a802beedfdfa7258eea06a01ea"} Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.140802 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" event={"ID":"ed40ac73-afc2-4dae-9364-e6775923e031","Type":"ContainerStarted","Data":"180e92f4300723a0fb7360497bad8275d1eee2638251be8e80db50a930ee763a"} Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.153629 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" event={"ID":"231e7091-0809-44e9-9d1a-d5a1ea092a64","Type":"ContainerStarted","Data":"bc21249d02f9c398c1a5ee9803f1b19752c4c0f6419a7f973cf32fc404cbb3f5"} Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.154511 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.170330 5024 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jhtl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.170414 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.179889 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.192370 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" event={"ID":"18150739-785b-44d6-8d0b-6f73eb45e9a7","Type":"ContainerStarted","Data":"3088e50688d1b7e8eac8fc933f71e46822704c1c0081c5facf9d0bb7a6483e58"} Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.199946 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.699921217 +0000 UTC m=+163.748842122 (durationBeforeRetry 500ms). 
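A cluster of readiness failures runs alongside the volume errors: console-operator on 10.217.0.12:8443 and oauth-openshift on 10.217.0.16:6443 both refuse connections (the console downloads server at 10.217.0.13:8080 follows below). In each case the prober fails at the TCP layer, before any HTTP exchange: the container has started but has not bound its port yet. A dial-level sketch of that failure mode, with the address taken from the oauth-openshift entry (pod IPs are only reachable from inside the cluster network):

// dial_probe.go: the TCP-level failure behind "connect: connection refused".
package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    conn, err := net.DialTimeout("tcp", "10.217.0.16:6443", time.Second)
    if err != nil {
        // Matches the prober output: a refused connection fails the probe
        // before any HTTP request is sent.
        fmt.Println("probe failed:", err)
        return
    }
    conn.Close()
    fmt.Println("port open; the HTTP-level check would run next")
}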
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.213576 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" podStartSLOduration=142.213548345 podStartE2EDuration="2m22.213548345s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:01.058298358 +0000 UTC m=+163.107219263" watchObservedRunningTime="2025-11-28 17:01:01.213548345 +0000 UTC m=+163.262469250" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.214762 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-msz56" event={"ID":"ecaf8d7e-7f08-44c9-b980-db9180876825","Type":"ContainerStarted","Data":"73f67a43a1d0e9229ae2c8d3d78d857d57963d7beb93e5636f66007510d1e210"} Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.216469 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.221601 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.221655 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.246461 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.277957 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.284259 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.290665 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.790629508 +0000 UTC m=+163.839550413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.384688 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c7d2s" podStartSLOduration=142.384665134 podStartE2EDuration="2m22.384665134s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:01.214580335 +0000 UTC m=+163.263501240" watchObservedRunningTime="2025-11-28 17:01:01.384665134 +0000 UTC m=+163.433586039" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.385475 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" podStartSLOduration=143.385468267 podStartE2EDuration="2m23.385468267s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:01.383044758 +0000 UTC m=+163.431965663" watchObservedRunningTime="2025-11-28 17:01:01.385468267 +0000 UTC m=+163.434389172" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.399844 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.404993 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:01.904974022 +0000 UTC m=+163.953894927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.443928 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:01 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:01 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:01 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.444031 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.513091 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.513570 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:02.013536001 +0000 UTC m=+164.062456906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.615237 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.615744 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:02.115722799 +0000 UTC m=+164.164643704 (durationBeforeRetry 500ms). 
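The router's startup probe keeps failing on a one-second period with the same two sub-checks down. Startup-probe semantics make the surrounding quiet expected: while a startup probe is failing, the kubelet withholds liveness and readiness probing for that container, and restarts it only after failureThreshold consecutive failures. Expressed in core/v1 types, the probe in this log looks roughly like the sketch below; periodSeconds matches the observed cadence, while failureThreshold is illustrative (the real value lives in the router deployment):

// router_probe.go: an approximation of the startup probe seen in this log.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    probe := corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/healthz/ready",
                Port: intstr.FromInt(1936),
                Host: "localhost",
            },
        },
        PeriodSeconds:    1,   // one failure per second in the log
        FailureThreshold: 120, // illustrative; not taken from the deployment
    }
    fmt.Printf("%+v\n", probe)
}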
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.725138 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.725786 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:02.2257599 +0000 UTC m=+164.274680815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:01 crc kubenswrapper[5024]: I1128 17:01:01.827369 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb"
Nov 28 17:01:01 crc kubenswrapper[5024]: E1128 17:01:01.827719 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:02.327707361 +0000 UTC m=+164.376628266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.359063 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" event={"ID":"aba788cd-c369-417f-a2b5-fb92019fc864","Type":"ContainerStarted","Data":"c44a238ca354692305a6102a1eacc8040430d668ed58736304b7e97e552dab66"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.365403 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" event={"ID":"09c095d1-717c-43f6-9022-f46530bac373","Type":"ContainerStarted","Data":"8aa0671340a182350f77ef36ebaa92afc9d098fca0ec12cc5441fd743aca66f5"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.365645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb"
Nov 28 17:01:02 crc kubenswrapper[5024]: E1128 17:01:02.366096 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:02.86607848 +0000 UTC m=+164.914999375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
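Every MountVolume.MountDevice and UnmountVolume.TearDown attempt in this stretch fails at the same first step: the kubelet cannot construct a CSI client because the node plugin for kubevirt.io.hostpath-provisioner has not registered itself yet, so the mount path and the unmount path surface the identical "not found in the list of registered CSI drivers" error. A minimal Go sketch of that lookup pattern follows; the types, names, and socket path are illustrative stand-ins, not kubelet's actual plugin-registration code.

    package main

    import "fmt"

    // driverRegistry stands in for the kubelet's list of node-registered CSI
    // plugins (the real list is populated via the plugin-registration socket
    // under /var/lib/kubelet/plugins_registry; a plain map is used here purely
    // for illustration).
    type driverRegistry struct {
        endpoints map[string]string // driver name -> plugin socket path
    }

    // clientFor fails the way the log shows whenever a PV references a CSI
    // driver whose node plugin has not registered yet.
    func (r *driverRegistry) clientFor(name string) (string, error) {
        ep, ok := r.endpoints[name]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        reg := &driverRegistry{endpoints: map[string]string{}} // provisioner not up yet

        // MountDevice and TearDownAt both begin with this lookup, which is why
        // the mount and unmount paths fail with the same message.
        if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("attacher.MountDevice failed to create newCsiDriverClient:", err)
        }

        // Once the plugin registers its socket (path below is invented), the
        // same lookup succeeds and the retry loop in the log would resolve.
        reg.endpoints["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
        if ep, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("driver registered at", ep)
        }
    }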
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.369688 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" event={"ID":"18150739-785b-44d6-8d0b-6f73eb45e9a7","Type":"ContainerStarted","Data":"57812aa255b4515b83b8b6891d786ef8174e5d363dfe6875837068003bdc4c1d"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.372415 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" event={"ID":"a471ed62-1700-448f-a592-568efaafca96","Type":"ContainerStarted","Data":"1c81248fcc6f6c56cd21ec262d4426534efab04a91095e1daf4d684f8adef11a"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.388901 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" event={"ID":"9afc0a0f-ea3f-41c4-8196-85b09cca5655","Type":"ContainerStarted","Data":"8720b72290892481b0a000e5bc3143d7dcd91ccd9845691ad3ac839afe2766be"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.394300 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" event={"ID":"0ebab130-4c94-441a-90a2-a20310673821","Type":"ContainerStarted","Data":"95659bfeb35688e96cff0a23d711b3bbdaa868ee197bd949ad531d85a118dbc2"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.400228 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" event={"ID":"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b","Type":"ContainerStarted","Data":"12d9db7a558c4b608d7fea97969a6f51b77862ed5a7a1c58c56cda925e1d0f78"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.401582 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8f8nq" podStartSLOduration=143.401539549 podStartE2EDuration="2m23.401539549s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.395450866 +0000 UTC m=+164.444371771" watchObservedRunningTime="2025-11-28 17:01:02.401539549 +0000 UTC m=+164.450460484"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.405296 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" event={"ID":"0d965e97-c291-48c5-9be5-188c921a0350","Type":"ContainerStarted","Data":"cb1da559413d5acbd16bb61c6f93afb0f37072faf13b7efc0170bb436400b232"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.415522 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" event={"ID":"a809b012-e8e1-4061-8fcf-7c9083e5569d","Type":"ContainerStarted","Data":"3587ec8e4f4a82564357ec255afa58376ebf557b38530e64ad51e4444f22f3e7"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.416534 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.429956 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" event={"ID":"c63a391a-52c5-4121-b857-052c0962cf5a","Type":"ContainerStarted","Data":"60573754a90217e8d25e3068a5a133ee539915249150ebc4f9d200373d9ad1fe"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.430170 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 17:01:02 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld
Nov 28 17:01:02 crc kubenswrapper[5024]: [+]process-running ok
Nov 28 17:01:02 crc kubenswrapper[5024]: healthz check failed
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.430221 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.430673 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.437179 5024 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4pgzf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.437251 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" podUID="c63a391a-52c5-4121-b857-052c0962cf5a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.448393 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfqrh" podStartSLOduration=143.448366422 podStartE2EDuration="2m23.448366422s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.447883338 +0000 UTC m=+164.496804243" watchObservedRunningTime="2025-11-28 17:01:02.448366422 +0000 UTC m=+164.497287327"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.535253 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" podStartSLOduration=143.535233263 podStartE2EDuration="2m23.535233263s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.53477202 +0000 UTC m=+164.583692935" watchObservedRunningTime="2025-11-28 17:01:02.535233263 +0000 UTC m=+164.584154168"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.552611 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" event={"ID":"73be74ad-f659-4b81-b809-266f951e4994","Type":"ContainerStarted","Data":"cbb59619c87f7b9505608a923d997de83fb9b46d5a0b197e1a3cda19f41aeddc"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.552658 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.552675 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" event={"ID":"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a","Type":"ContainerStarted","Data":"763f439d1b9a70e804ea009d13e823966fef4de6bd0f6ff7e2831fba5e990d9c"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.570635 5024 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ldx2f container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.570693 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" podUID="73be74ad-f659-4b81-b809-266f951e4994" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.583052 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" event={"ID":"af33dff7-bbd3-42d1-9995-c5c008e56e01","Type":"ContainerStarted","Data":"f7e9bf2323ca49cd97ba3c00bf482700ab542de25d5c73b95591517af98be3fd"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.597684 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" podStartSLOduration=143.59766449 podStartE2EDuration="2m23.59766449s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.596587349 +0000 UTC m=+164.645508274" watchObservedRunningTime="2025-11-28 17:01:02.59766449 +0000 UTC m=+164.646585395"
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.604955 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" event={"ID":"bd5edd56-6cd5-4268-8728-0ba97f2e5cca","Type":"ContainerStarted","Data":"1a10257811163e32b13a6412edc51294a5e44b88d1f0b4bad749926895bd7354"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.610783 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" event={"ID":"09ca800c-f2da-4db9-8570-a3605b84835e","Type":"ContainerStarted","Data":"58dc2e295a33862df6b9d29942d4287db836b09b7b6905835c30fadf5071adb8"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.615173 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" event={"ID":"10abaa97-056b-4cd6-adbb-36b64dcef7cd","Type":"ContainerStarted","Data":"2828ff133f9b034acded4ea731a70c76464086d2ea759ebce04f111e6a3b18d1"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.618714 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zgtq6" event={"ID":"78b42959-ba28-4734-b550-04e7d70496b8","Type":"ContainerStarted","Data":"6847dfb5211a8d2ba238bcc817c4cb1adc2fe4d9719299780acd5c114459d363"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.632926 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" event={"ID":"96e29661-be19-4efb-8337-661e5af2c4a2","Type":"ContainerStarted","Data":"e879463647ea5e13a4bdd68f14864142c99fc778814e14bf75beafe39e73a374"}
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.636688 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" event={"ID":"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1","Type":"ContainerStarted","Data":"90eadedb8f9807317a95c6d16b57becb8640775b8621f750a79ef7834fdb55a9"}
event={"ID":"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1","Type":"ContainerStarted","Data":"90eadedb8f9807317a95c6d16b57becb8640775b8621f750a79ef7834fdb55a9"} Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.651598 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" podStartSLOduration=143.651571844 podStartE2EDuration="2m23.651571844s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.650893904 +0000 UTC m=+164.699814809" watchObservedRunningTime="2025-11-28 17:01:02.651571844 +0000 UTC m=+164.700492749" Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.655433 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" event={"ID":"eb689c5a-3342-4dd0-ba63-30477d447ac4","Type":"ContainerStarted","Data":"23c33265bf4756c5c5d92e1c7cc98701b0c1b13559d30588b9fcf22bd4b5d042"} Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.666677 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" event={"ID":"80a843cd-6141-431e-83c1-a7ce0110e31f","Type":"ContainerStarted","Data":"213be41ff4da95b7cc71ec5360caf9eb6ff2895cf36d82f7601157b4f203b416"} Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.669893 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" podStartSLOduration=62.669871774 podStartE2EDuration="1m2.669871774s" podCreationTimestamp="2025-11-28 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.666153229 +0000 UTC m=+164.715074144" watchObservedRunningTime="2025-11-28 17:01:02.669871774 +0000 UTC m=+164.718792679" Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.675482 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9jlxs" event={"ID":"13151297-dd89-4f46-8614-04670773ad2b","Type":"ContainerStarted","Data":"b9c5bcb2b7217249d39d75d658073a3121107cdafa2a7b53643b0cd7270e6a35"} Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.677223 5024 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jhtl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.677271 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.677673 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9jlxs" Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.677986 5024 patch_prober.go:28] interesting pod/console-operator-58897d9998-vj7pt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 
10.217.0.12:8443: connect: connection refused" start-of-body= Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.678347 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.678408 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.679313 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" podUID="ab8f76d6-5ca4-4197-b6df-87fe4d019383" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.687623 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cw8g" podStartSLOduration=143.687600579 podStartE2EDuration="2m23.687600579s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.682579706 +0000 UTC m=+164.731500611" watchObservedRunningTime="2025-11-28 17:01:02.687600579 +0000 UTC m=+164.736521484" Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.694131 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:02 crc kubenswrapper[5024]: E1128 17:01:02.694411 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:03.194380082 +0000 UTC m=+165.243300987 (durationBeforeRetry 500ms). 
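The patch_prober/prober pairs above are httpGet readiness probes failing with "connect: connection refused": the containers have started, but their servers are not listening yet. A stripped-down illustration of what such a probe does is below; probeHTTP is a hypothetical helper, not the kubelet prober's API.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeHTTP is a stand-in for the kubelet's httpGet probe: GET the endpoint,
    // treat 2xx/3xx as success, and report anything else (including TCP connect
    // errors) as a probe failure.
    func probeHTTP(url string, timeout time.Duration) (string, error) {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            // The failures in the log have this shape: the container process is
            // up, but its listener is not bound yet, so the connect is refused.
            return "failure", err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success", nil
        }
        return "failure", fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
        // Nothing listens on this port, so this fails with "connection refused",
        // just like the readiness probes above.
        result, err := probeHTTP("http://127.0.0.1:59999/healthz", time.Second)
        fmt.Println("probeResult:", result, "output:", err)
    }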
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.802626 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:02 crc kubenswrapper[5024]: E1128 17:01:02.802792 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:03.302760976 +0000 UTC m=+165.351681881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:02 crc kubenswrapper[5024]: I1128 17:01:02.822489 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9jlxs" podStartSLOduration=9.822451126 podStartE2EDuration="9.822451126s" podCreationTimestamp="2025-11-28 17:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:02.820918293 +0000 UTC m=+164.869839208" watchObservedRunningTime="2025-11-28 17:01:02.822451126 +0000 UTC m=+164.871372031"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.013303 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb"
Nov 28 17:01:03 crc kubenswrapper[5024]: E1128 17:01:03.013812 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:03.513797391 +0000 UTC m=+165.562718296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.114832 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.122053 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/949e234b-60b0-40e4-a423-0596dafd56c1-metrics-certs\") pod \"network-metrics-daemon-5t4kc\" (UID: \"949e234b-60b0-40e4-a423-0596dafd56c1\") " pod="openshift-multus/network-metrics-daemon-5t4kc"
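The pod_startup_latency_tracker entries report podStartSLOduration as watchObservedRunningTime minus podCreationTimestamp; with no image pull involved, podStartE2EDuration carries the same value and both pull timestamps stay at the zero time "0001-01-01 00:00:00 +0000 UTC". A quick check of the dns-default-9jlxs numbers above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the dns-default-9jlxs entry above. Go's parser
        // accepts the fractional seconds even though the layout omits them.
        layout := "2006-01-02 15:04:05 -0700 MST"
        created, err := time.Parse(layout, "2025-11-28 17:00:53 +0000 UTC")
        if err != nil {
            panic(err)
        }
        watched, err := time.Parse(layout, "2025-11-28 17:01:02.822451126 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 9.822451126, matching podStartSLOduration in the log line.
        fmt.Println("podStartSLOduration =", watched.Sub(created).Seconds())
    }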
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.277879 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.467702 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5t4kc"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.522104 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
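The router's startup probe output uses the conventional Kubernetes healthz rendering: one "[-]name failed" or "[+]name ok" line per registered check, a trailing "healthz check failed", and HTTP 500 until every check passes. A self-contained sketch of such an aggregated handler follows; the check names and logic are invented for illustration, not the openshift-router's actual checks.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
    )

    // check is one named health check, rendered in the "[-]/[+]" style seen in
    // the router probe output above.
    type check struct {
        name string
        fn   func() error
    }

    func healthzHandler(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                // A probe sees HTTP 500 until every check passes.
                w.WriteHeader(http.StatusInternalServerError)
                io.WriteString(w, body+"healthz check failed\n")
                return
            }
            io.WriteString(w, body+"ok\n")
        }
    }

    func main() {
        srv := httptest.NewServer(healthzHandler([]check{
            {"backend-http", func() error { return fmt.Errorf("no backends synced") }},
            {"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
            {"process-running", func() error { return nil }},
        }))
        defer srv.Close()

        resp, err := http.Get(srv.URL)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        b, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d\n%s", resp.StatusCode, b)
    }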
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.690955 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" podUID="c63a391a-52c5-4121-b857-052c0962cf5a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.693121 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" podUID="73be74ad-f659-4b81-b809-266f951e4994" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.694077 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" podUID="ab8f76d6-5ca4-4197-b6df-87fe4d019383" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.760305 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-dqkhr" podStartSLOduration=143.760276532 podStartE2EDuration="2m23.760276532s" podCreationTimestamp="2025-11-28 16:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:03.714579131 +0000 UTC m=+165.763500046" watchObservedRunningTime="2025-11-28 17:01:03.760276532 +0000 UTC m=+165.809197437"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.821685 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s8v9n" podStartSLOduration=144.821661889 podStartE2EDuration="2m24.821661889s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:03.763807622 +0000 UTC m=+165.812728527" watchObservedRunningTime="2025-11-28 17:01:03.821661889 +0000 UTC m=+165.870582814"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.823916 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" podStartSLOduration=144.823906992 podStartE2EDuration="2m24.823906992s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:03.819719923 +0000 UTC m=+165.868640828" watchObservedRunningTime="2025-11-28 17:01:03.823906992 +0000 UTC m=+165.872827897"
Nov 28 17:01:03 crc kubenswrapper[5024]: I1128 17:01:03.981261 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb"
Nov 28 17:01:03 crc kubenswrapper[5024]: E1128 17:01:03.981755 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:04.481736072 +0000 UTC m=+166.530656977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.451770 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.595314 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:04 crc kubenswrapper[5024]: E1128 17:01:04.596415 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:05.096374102 +0000 UTC m=+167.145295007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.775032 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:04 crc kubenswrapper[5024]: E1128 17:01:04.775901 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:05.2758835 +0000 UTC m=+167.324804405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.796752 5024 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-v9pk8 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.796850 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" podUID="a809b012-e8e1-4061-8fcf-7c9083e5569d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.796855 5024 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-v9pk8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.796932 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" podUID="a809b012-e8e1-4061-8fcf-7c9083e5569d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.861055 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" 
event={"ID":"46f54d87-6d5a-4c5d-ac6d-33b33fcc16a1","Type":"ContainerStarted","Data":"de18947a03bb5d95b828524f94c02e29ccc30112e261ea18400006d693b269bc"} Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.862635 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.883811 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:04 crc kubenswrapper[5024]: E1128 17:01:04.884376 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:05.384352847 +0000 UTC m=+167.433273762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.909188 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" event={"ID":"eb689c5a-3342-4dd0-ba63-30477d447ac4","Type":"ContainerStarted","Data":"838fecbc9285ac9233e82e8a07cd060b621d3da35c8a3b9688bfa89e8d5f8161"} Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.945102 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" event={"ID":"af33dff7-bbd3-42d1-9995-c5c008e56e01","Type":"ContainerStarted","Data":"d3a6d7509dad984285fe8c0e6652de1b8a27fdeef57db3221d0f6ebbc5eea6e3"} Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.954594 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc" podStartSLOduration=145.954572945 podStartE2EDuration="2m25.954572945s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:04.953877855 +0000 UTC m=+167.002798760" watchObservedRunningTime="2025-11-28 17:01:04.954572945 +0000 UTC m=+167.003493850" Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.973287 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" event={"ID":"09ca800c-f2da-4db9-8570-a3605b84835e","Type":"ContainerStarted","Data":"759b32227454733b5e67c0f4a5fc25d3f5c212e8f8d0d466ead5b37dcca297bb"} Nov 28 17:01:04 crc kubenswrapper[5024]: I1128 17:01:04.988815 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:04 crc kubenswrapper[5024]: E1128 17:01:04.989438 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:05.489418926 +0000 UTC m=+167.538339831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.001613 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-msz56" event={"ID":"ecaf8d7e-7f08-44c9-b980-db9180876825","Type":"ContainerStarted","Data":"4c602cbfcfddc4ab77106180e34ea13a03f906586a5a2badf36b8be6fb541c80"} Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.007137 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" event={"ID":"a471ed62-1700-448f-a592-568efaafca96","Type":"ContainerStarted","Data":"6585a434f05dcf7d30b747f0337d41130264ff29a8176bcffa5a9376ce5115c8"} Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.017897 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4pxb8" podStartSLOduration=146.017871516 podStartE2EDuration="2m26.017871516s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:05.017360971 +0000 UTC m=+167.066281876" watchObservedRunningTime="2025-11-28 17:01:05.017871516 +0000 UTC m=+167.066792421" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.023327 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" event={"ID":"005aa4d7-4177-4a67-abeb-ff0c25b0ae9b","Type":"ContainerStarted","Data":"5f00a0a917c77331bb9de6d247d78599f687e3adfc535e28a1109090d61d04a3"} Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.059877 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" event={"ID":"ed40ac73-afc2-4dae-9364-e6775923e031","Type":"ContainerStarted","Data":"add79e6ef25d73f76e60c7b2d0d2f611800c3b79d21257ff2e0d51682f31e5e2"} Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.061453 5024 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-v9pk8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.061495 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" podUID="a809b012-e8e1-4061-8fcf-7c9083e5569d" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.089986 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x56ns" podStartSLOduration=146.089964798 podStartE2EDuration="2m26.089964798s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:05.088406853 +0000 UTC m=+167.137327758" watchObservedRunningTime="2025-11-28 17:01:05.089964798 +0000 UTC m=+167.138885703" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.090599 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.091291 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:05.591253884 +0000 UTC m=+167.640174839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.192643 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.196012 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:05.695984334 +0000 UTC m=+167.744905439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.297099 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.297625 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:05.797603336 +0000 UTC m=+167.846524241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.399544 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4bww5" podStartSLOduration=146.399512326 podStartE2EDuration="2m26.399512326s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:05.396612953 +0000 UTC m=+167.445533858" watchObservedRunningTime="2025-11-28 17:01:05.399512326 +0000 UTC m=+167.448433231" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.420664 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zgtq6" podStartSLOduration=12.420633027 podStartE2EDuration="12.420633027s" podCreationTimestamp="2025-11-28 17:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:05.254543011 +0000 UTC m=+167.303463926" watchObservedRunningTime="2025-11-28 17:01:05.420633027 +0000 UTC m=+167.469553932" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.400800 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.401271 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-11-28 17:01:05.901248965 +0000 UTC m=+167.950169870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.463420 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:05 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:05 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:05 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.463503 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.521539 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.521897 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.523193 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.523742 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.02371793 +0000 UTC m=+168.072638835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.596340 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.596416 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.596523 5024 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6jk4g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.596540 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" podUID="ed40ac73-afc2-4dae-9364-e6775923e031" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.601911 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.601988 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.624917 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" podStartSLOduration=147.624894169 podStartE2EDuration="2m27.624894169s" podCreationTimestamp="2025-11-28 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:05.624606691 +0000 UTC m=+167.673527616" watchObservedRunningTime="2025-11-28 17:01:05.624894169 +0000 UTC m=+167.673815074" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.625668 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.626167 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.126150455 +0000 UTC m=+168.175071360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.729771 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.729862 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.730367 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.730775 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.230753511 +0000 UTC m=+168.279674416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.811917 5024 patch_prober.go:28] interesting pod/console-f9d7485db-r7n7g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.812008 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r7n7g" podUID="f84f4343-2000-4b50-9650-22953ca7d39d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.844927 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.845280 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.845583 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.846201 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.346177916 +0000 UTC m=+168.395099001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.853351 5024 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qx48m container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.853419 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" podUID="96e29661-be19-4efb-8337-661e5af2c4a2" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 28 17:01:05 crc kubenswrapper[5024]: I1128 17:01:05.961133 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:05 crc kubenswrapper[5024]: E1128 17:01:05.961387 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.461364703 +0000 UTC m=+168.510285598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.065262 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.066141 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.566123864 +0000 UTC m=+168.615044769 (durationBeforeRetry 500ms). 
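
The probe records interleaved with the volume errors are a separate symptom of the same cold start: "connect: connection refused" means the container process is not listening yet, while the router's startup probe gets an HTTP 500 whose body enumerates per-check results ([-]backend-http failed, [+]process-running ok, and so on). Kubelet counts an HTTP probe as successful only for 2xx/3xx responses. A self-contained sketch of that check — probe is a hypothetical stand-in for kubelet's prober, and the URL is borrowed from the downloads pod above:

    // probe.go - sketch of an HTTP health probe in the spirit of kubelet's
    // prober: GET the endpoint with a short timeout; any status outside
    // 2xx/3xx, or a dial error, counts as failure.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused" while the server starts
        }
        defer resp.Body.Close()
        // Keep only the start of the body, as the kubelet log does.
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("HTTP probe failed with statuscode: %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        if err := probe("http://10.217.0.13:8080/"); err != nil {
            fmt.Println("probe failed:", err)
        }
    }
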
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.098779 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" podStartSLOduration=147.098211887 podStartE2EDuration="2m27.098211887s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:06.095727086 +0000 UTC m=+168.144648001" watchObservedRunningTime="2025-11-28 17:01:06.098211887 +0000 UTC m=+168.147132802" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.111080 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5t4kc"] Nov 28 17:01:06 crc kubenswrapper[5024]: W1128 17:01:06.123947 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod949e234b_60b0_40e4_a423_0596dafd56c1.slice/crio-220587dd5c129216364d9b97a3853490e89d699975d9e55fe932fb6fa7a00cad WatchSource:0}: Error finding container 220587dd5c129216364d9b97a3853490e89d699975d9e55fe932fb6fa7a00cad: Status 404 returned error can't find the container with id 220587dd5c129216364d9b97a3853490e89d699975d9e55fe932fb6fa7a00cad Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.167630 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.169336 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.66930726 +0000 UTC m=+168.718228335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.273913 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.274323 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.774309308 +0000 UTC m=+168.823230213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.344478 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kkcnh" podStartSLOduration=147.344446434 podStartE2EDuration="2m27.344446434s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:06.237649505 +0000 UTC m=+168.286570410" watchObservedRunningTime="2025-11-28 17:01:06.344446434 +0000 UTC m=+168.393367339" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.374769 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.375092 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.875067165 +0000 UTC m=+168.923988070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.409610 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" podStartSLOduration=147.409583097 podStartE2EDuration="2m27.409583097s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:06.39597025 +0000 UTC m=+168.444891145" watchObservedRunningTime="2025-11-28 17:01:06.409583097 +0000 UTC m=+168.458504002" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.414784 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wn4qw" podStartSLOduration=147.414750924 podStartE2EDuration="2m27.414750924s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:06.346818791 +0000 UTC m=+168.395739696" watchObservedRunningTime="2025-11-28 17:01:06.414750924 +0000 UTC m=+168.463671829" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.419126 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-b2t9m" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.439295 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:06 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:06 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:06 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.439363 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.440644 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j485j" podStartSLOduration=147.44062104 podStartE2EDuration="2m27.44062104s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:06.440035694 +0000 UTC m=+168.488956599" watchObservedRunningTime="2025-11-28 17:01:06.44062104 +0000 UTC m=+168.489541945" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.476508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.476968 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:06.976946674 +0000 UTC m=+169.025867579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.490550 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prxwd" podStartSLOduration=147.49052398 podStartE2EDuration="2m27.49052398s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:06.488914565 +0000 UTC m=+168.537835470" watchObservedRunningTime="2025-11-28 17:01:06.49052398 +0000 UTC m=+168.539444885" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.584846 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.585262 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.085244176 +0000 UTC m=+169.134165081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.688147 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.688601 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.188586476 +0000 UTC m=+169.237507381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.730993 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.731809 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.737767 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.737785 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.752923 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.789944 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.793435 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.290654991 +0000 UTC m=+169.339575896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.793522 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af94043b-2d01-4b6c-b384-af1a3b65ffba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.793734 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af94043b-2d01-4b6c-b384-af1a3b65ffba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.834561 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.835696 5024 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6p4ff container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.835739 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.836095 5024 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6p4ff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.836124 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.836301 5024 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6p4ff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.836326 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.894539 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af94043b-2d01-4b6c-b384-af1a3b65ffba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.894628 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.894722 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af94043b-2d01-4b6c-b384-af1a3b65ffba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:06 crc kubenswrapper[5024]: I1128 17:01:06.894755 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af94043b-2d01-4b6c-b384-af1a3b65ffba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:06 crc kubenswrapper[5024]: E1128 17:01:06.895317 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.395291988 +0000 UTC m=+169.444212893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.012077 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.012359 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.512337449 +0000 UTC m=+169.561258354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.065297 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af94043b-2d01-4b6c-b384-af1a3b65ffba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.133904 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.141658 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.142178 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.642163643 +0000 UTC m=+169.691084548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.144547 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" event={"ID":"949e234b-60b0-40e4-a423-0596dafd56c1","Type":"ContainerStarted","Data":"bb1c791929f8443446afe4226ff5cf52405d83fc06efae613290d56611908554"} Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.144605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" event={"ID":"949e234b-60b0-40e4-a423-0596dafd56c1","Type":"ContainerStarted","Data":"220587dd5c129216364d9b97a3853490e89d699975d9e55fe932fb6fa7a00cad"} Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.157642 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-msz56" event={"ID":"ecaf8d7e-7f08-44c9-b980-db9180876825","Type":"ContainerStarted","Data":"c223d68bff7536c4e1db799d94c2d874a0e430eec52a64e33b3714f07b1c7013"} Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.198913 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.204322 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ldx2f" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.243315 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.245909 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.745841703 +0000 UTC m=+169.794762608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.262875 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.269634 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.769602999 +0000 UTC m=+169.818523904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.323357 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gthl8" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.363688 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.364130 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.863980124 +0000 UTC m=+169.912901029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.439311 5024 patch_prober.go:28] interesting pod/console-operator-58897d9998-vj7pt container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.439383 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" podUID="ab8f76d6-5ca4-4197-b6df-87fe4d019383" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.443111 5024 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7jhtl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.443160 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.464874 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.465321 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.965304127 +0000 UTC m=+170.014225022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.470607 5024 patch_prober.go:28] interesting pod/console-operator-58897d9998-vj7pt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.470687 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vj7pt" podUID="ab8f76d6-5ca4-4197-b6df-87fe4d019383" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.471269 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:07 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:07 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:07 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.471347 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.569117 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.570397 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.070349146 +0000 UTC m=+170.119270051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.572170 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.572222 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.602344 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4pgzf" Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.673740 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.674503 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.174482449 +0000 UTC m=+170.223403354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.776245 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.776693 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.276670657 +0000 UTC m=+170.325591562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.878572 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.878921 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.378907106 +0000 UTC m=+170.427828011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[5024]: I1128 17:01:07.979598 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:07 crc kubenswrapper[5024]: E1128 17:01:07.980030 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.479988592 +0000 UTC m=+170.528909497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.081456 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.082225 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.582196221 +0000 UTC m=+170.631117126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.166652 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5t4kc" event={"ID":"949e234b-60b0-40e4-a423-0596dafd56c1","Type":"ContainerStarted","Data":"6202495d7b27101aa05d83234f349101ffce412df341fa33bb1d3d07d2ffda31"} Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.169475 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-msz56" event={"ID":"ecaf8d7e-7f08-44c9-b980-db9180876825","Type":"ContainerStarted","Data":"728ae925a35e3f0e74b8be02d2f48535180afc6044afcb8f828fb6d84c6c38f4"} Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.188124 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.188548 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.688526496 +0000 UTC m=+170.737447411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.217312 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5t4kc" podStartSLOduration=149.217283975 podStartE2EDuration="2m29.217283975s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:08.21677151 +0000 UTC m=+170.265692435" watchObservedRunningTime="2025-11-28 17:01:08.217283975 +0000 UTC m=+170.266204880" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.283329 5024 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.292389 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.292903 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.792887616 +0000 UTC m=+170.841808511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.393184 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.393755 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.893732835 +0000 UTC m=+170.942653740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.424258 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:08 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:08 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:08 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.424352 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.496592 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.497115 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.997096097 +0000 UTC m=+171.046017002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.597937 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.598229 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.098187333 +0000 UTC m=+171.147108238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.598742 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.599134 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.09911909 +0000 UTC m=+171.148039995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.674633 5024 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-v9pk8 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.674705 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" podUID="a809b012-e8e1-4061-8fcf-7c9083e5569d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.675270 5024 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-v9pk8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.675355 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" podUID="a809b012-e8e1-4061-8fcf-7c9083e5569d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.700798 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.701227 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.201208235 +0000 UTC m=+171.250129140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.804125 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.804567 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.304551135 +0000 UTC m=+171.353472040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.912238 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:08 crc kubenswrapper[5024]: E1128 17:01:08.912562 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.412544408 +0000 UTC m=+171.461465313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.912590 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rc8qm"] Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.913710 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.925569 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kx8x6"] Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.936556 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.947997 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kx8x6"] Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.948170 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:08 crc kubenswrapper[5024]: I1128 17:01:08.962576 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.008125 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j64mb"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.009604 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.015527 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-utilities\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.015776 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-catalog-content\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.015892 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhws\" (UniqueName: \"kubernetes.io/projected/2a0db523-f690-4c23-8324-b417a8ccd4b2-kube-api-access-fzhws\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.015986 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.016073 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-catalog-content\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.016166 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52d7w\" (UniqueName: \"kubernetes.io/projected/8fae0fa8-8183-4e44-afed-63a655dd82c5-kube-api-access-52d7w\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.016259 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-utilities\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: E1128 17:01:09.016900 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.516877637 +0000 UTC m=+171.565798542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.024888 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.025772 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.025973 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rc8qm"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.030468 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.031660 5024 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T17:01:08.283372505Z","Handler":null,"Name":""} Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.043783 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.046802 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j64mb"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.079138 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.103703 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2wdtz"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.104920 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[5024]: E1128 17:01:09.117513 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.617470539 +0000 UTC m=+171.666391444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117659 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7e31f1-782f-4618-9400-4049b5d50ae5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117712 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117744 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-catalog-content\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117774 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-catalog-content\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117800 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52d7w\" (UniqueName: \"kubernetes.io/projected/8fae0fa8-8183-4e44-afed-63a655dd82c5-kube-api-access-52d7w\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-utilities\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117878 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhx8\" (UniqueName: \"kubernetes.io/projected/f10908eb-32ed-4e49-b1ea-7b627343b29d-kube-api-access-cqhx8\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117948 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-utilities\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.117983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-utilities\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: E1128 17:01:09.118091 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.618071526 +0000 UTC m=+171.666992601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n4vqb" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.118718 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-utilities\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.586566 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-catalog-content\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.586661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-catalog-content\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.586715 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce7e31f1-782f-4618-9400-4049b5d50ae5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.586741 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhws\" (UniqueName: \"kubernetes.io/projected/2a0db523-f690-4c23-8324-b417a8ccd4b2-kube-api-access-fzhws\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.587712 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-catalog-content\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.587988 5024 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.588057 5024 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.594548 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-utilities\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.610891 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:09 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:09 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:09 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.610974 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.645494 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhws\" (UniqueName: \"kubernetes.io/projected/2a0db523-f690-4c23-8324-b417a8ccd4b2-kube-api-access-fzhws\") pod \"certified-operators-kx8x6\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") " pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.649226 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52d7w\" (UniqueName: \"kubernetes.io/projected/8fae0fa8-8183-4e44-afed-63a655dd82c5-kube-api-access-52d7w\") pod \"community-operators-rc8qm\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.684638 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wdtz"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688139 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688616 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ce7e31f1-782f-4618-9400-4049b5d50ae5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688686 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-catalog-content\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688746 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcztk\" (UniqueName: \"kubernetes.io/projected/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-kube-api-access-vcztk\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688856 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqhx8\" (UniqueName: \"kubernetes.io/projected/f10908eb-32ed-4e49-b1ea-7b627343b29d-kube-api-access-cqhx8\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688900 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-utilities\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688956 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-catalog-content\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.688995 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-utilities\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.689063 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce7e31f1-782f-4618-9400-4049b5d50ae5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.694147 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-msz56" event={"ID":"ecaf8d7e-7f08-44c9-b980-db9180876825","Type":"ContainerStarted","Data":"1d0c9164eb4fd16089e74b88ae8795a9132842b303aaca7fc4445907f09c4b3c"} Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.702071 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-catalog-content\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.713145 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce7e31f1-782f-4618-9400-4049b5d50ae5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.714154 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-utilities\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.734125 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.737837 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.760964 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-msz56" podStartSLOduration=16.760943209 podStartE2EDuration="16.760943209s" podCreationTimestamp="2025-11-28 17:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:09.759833568 +0000 UTC m=+171.808754463" watchObservedRunningTime="2025-11-28 17:01:09.760943209 +0000 UTC m=+171.809864114" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.767681 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhx8\" (UniqueName: \"kubernetes.io/projected/f10908eb-32ed-4e49-b1ea-7b627343b29d-kube-api-access-cqhx8\") pod \"certified-operators-j64mb\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.778114 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7e31f1-782f-4618-9400-4049b5d50ae5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.788954 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.801365 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.814785 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-utilities\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.814088 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-utilities\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.815305 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-catalog-content\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.815605 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcztk\" (UniqueName: \"kubernetes.io/projected/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-kube-api-access-vcztk\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.815681 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-catalog-content\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.874831 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcztk\" (UniqueName: \"kubernetes.io/projected/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-kube-api-access-vcztk\") pod \"community-operators-2wdtz\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") " pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.913169 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.917082 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.936746 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.993027 5024 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 17:01:09 crc kubenswrapper[5024]: I1128 17:01:09.993128 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.112116 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.162763 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n4vqb\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") " pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.250330 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.546101 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:10 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:10 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:10 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.546203 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.572944 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.576680 5024 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6jk4g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]log ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]etcd ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/generic-apiserver-start-informers ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/max-in-flight-filter ok Nov 28 17:01:10 crc kubenswrapper[5024]: 
[+]poststarthook/storage-object-count-tracker-hook ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 28 17:01:10 crc kubenswrapper[5024]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 28 17:01:10 crc kubenswrapper[5024]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/project.openshift.io-projectcache ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-startinformers ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 28 17:01:10 crc kubenswrapper[5024]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 28 17:01:10 crc kubenswrapper[5024]: livez check failed Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.576746 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" podUID="ed40ac73-afc2-4dae-9364-e6775923e031" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.681432 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v9pk8" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.712291 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zl4ft"] Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.715261 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.720178 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.730947 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zl4ft"] Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.748390 5024 generic.go:334] "Generic (PLEG): container finished" podID="fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" containerID="763f439d1b9a70e804ea009d13e823966fef4de6bd0f6ff7e2831fba5e990d9c" exitCode=0 Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.748741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" event={"ID":"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a","Type":"ContainerDied","Data":"763f439d1b9a70e804ea009d13e823966fef4de6bd0f6ff7e2831fba5e990d9c"} Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.753801 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af94043b-2d01-4b6c-b384-af1a3b65ffba","Type":"ContainerStarted","Data":"04f1e433fe82b989fee01d09bd94edbedc5f32f281797f153e1f84e139363d89"} Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.798049 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84v4\" (UniqueName: \"kubernetes.io/projected/81188cf2-b85a-46bb-baf2-cda9e211eda7-kube-api-access-s84v4\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " 
pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.805618 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-catalog-content\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.805664 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-utilities\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.853884 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.869232 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qx48m" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.907631 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s84v4\" (UniqueName: \"kubernetes.io/projected/81188cf2-b85a-46bb-baf2-cda9e211eda7-kube-api-access-s84v4\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.907899 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-catalog-content\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.907944 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-utilities\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.908602 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-utilities\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.910888 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-catalog-content\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:10 crc kubenswrapper[5024]: I1128 17:01:10.980139 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s84v4\" (UniqueName: \"kubernetes.io/projected/81188cf2-b85a-46bb-baf2-cda9e211eda7-kube-api-access-s84v4\") pod \"redhat-marketplace-zl4ft\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") " 
pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.055803 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kx8x6"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.062129 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.078149 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j64mb"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.088371 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.094432 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gdgdt"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.099011 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.113510 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-catalog-content\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.113573 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnzn\" (UniqueName: \"kubernetes.io/projected/542f05d2-a977-40de-887d-bc3538393234-kube-api-access-8lnzn\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.113674 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-utilities\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.117200 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gdgdt"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.198512 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rc8qm"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.214537 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-utilities\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.214668 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-catalog-content\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 
17:01:11.214743 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lnzn\" (UniqueName: \"kubernetes.io/projected/542f05d2-a977-40de-887d-bc3538393234-kube-api-access-8lnzn\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.215999 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-utilities\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.216289 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-catalog-content\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.256772 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lnzn\" (UniqueName: \"kubernetes.io/projected/542f05d2-a977-40de-887d-bc3538393234-kube-api-access-8lnzn\") pod \"redhat-marketplace-gdgdt\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.414264 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wdtz"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.423550 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:11 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:11 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:11 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.423602 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.433067 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n4vqb"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.464296 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.670056 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9jlxs" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.681042 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pnzzt"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.682629 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.684582 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.701507 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnzzt"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.705333 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zl4ft"] Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.726163 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-catalog-content\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.726258 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-utilities\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.726309 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j58b\" (UniqueName: \"kubernetes.io/projected/610e20bb-07aa-46c2-9f83-1711f9133ad0-kube-api-access-5j58b\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.762400 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerStarted","Data":"4136ba2fb5cf112764d83b79cf05e66f112861703f3e18839888fb3c480e9e71"} Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.763745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ce7e31f1-782f-4618-9400-4049b5d50ae5","Type":"ContainerStarted","Data":"f2d3f27ff826546f32165a9e57b5cf5145413ed151d3512b8ac7793b8363c748"} Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.765488 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerStarted","Data":"680dd644bf1cd91ee773fc214e508c02ac7e124dbdaf37b54ac6000094b3ce48"} Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.766870 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j64mb" event={"ID":"f10908eb-32ed-4e49-b1ea-7b627343b29d","Type":"ContainerStarted","Data":"dfa58333ccd22c2e8ea74de83ea0bc11b91667480a61d84ff739558a0ba0bb0b"} Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.827617 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-catalog-content\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc 
kubenswrapper[5024]: I1128 17:01:11.827960 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-utilities\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.828101 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j58b\" (UniqueName: \"kubernetes.io/projected/610e20bb-07aa-46c2-9f83-1711f9133ad0-kube-api-access-5j58b\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.828371 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-catalog-content\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.829154 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-utilities\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:11 crc kubenswrapper[5024]: I1128 17:01:11.854574 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j58b\" (UniqueName: \"kubernetes.io/projected/610e20bb-07aa-46c2-9f83-1711f9133ad0-kube-api-access-5j58b\") pod \"redhat-operators-pnzzt\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") " pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.001285 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.091524 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lqfjv"] Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.092870 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.109342 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqfjv"] Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.156111 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-catalog-content\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.156300 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-utilities\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.156532 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrz5q\" (UniqueName: \"kubernetes.io/projected/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-kube-api-access-rrz5q\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.269743 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-catalog-content\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.270153 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-utilities\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.270190 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrz5q\" (UniqueName: \"kubernetes.io/projected/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-kube-api-access-rrz5q\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.276007 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-catalog-content\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.276753 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-utilities\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.290864 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.298731 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrz5q\" (UniqueName: \"kubernetes.io/projected/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-kube-api-access-rrz5q\") pod \"redhat-operators-lqfjv\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.331559 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.480058 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-secret-volume\") pod \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.480138 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46lrf\" (UniqueName: \"kubernetes.io/projected/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-kube-api-access-46lrf\") pod \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.480220 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-config-volume\") pod \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\" (UID: \"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a\") " Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.507395 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-config-volume" (OuterVolumeSpecName: "config-volume") pod "fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" (UID: "fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.535195 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:12 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:12 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:12 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.535271 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.600047 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.667162 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" (UID: "fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.677161 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-kube-api-access-46lrf" (OuterVolumeSpecName: "kube-api-access-46lrf") pod "fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" (UID: "fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a"). InnerVolumeSpecName "kube-api-access-46lrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.708154 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.708209 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46lrf\" (UniqueName: \"kubernetes.io/projected/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a-kube-api-access-46lrf\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.802583 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af94043b-2d01-4b6c-b384-af1a3b65ffba","Type":"ContainerStarted","Data":"bc3dcc59442b46c69f38c39775113b2f7c0f2d97b12e84289f3f05fe18f654e9"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.806833 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wdtz" event={"ID":"3ef7db62-d78a-4b3d-bb51-c7a2a434d735","Type":"ContainerStarted","Data":"b8a6889521fd5c9f322d2c101db5e836428bdd6d68cab461517d811fe68a1214"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.806886 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wdtz" event={"ID":"3ef7db62-d78a-4b3d-bb51-c7a2a434d735","Type":"ContainerStarted","Data":"7ddae49d83bc1611c3561f9d4f8f513d51b107089a5cfe21e95831394128b7fb"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.809261 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerStarted","Data":"1c6c2081769d4df2058cc74ea0fb949d0c6bc9f92ae1981b8856303bc27a338a"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.812123 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" event={"ID":"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931","Type":"ContainerStarted","Data":"93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.812179 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" event={"ID":"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931","Type":"ContainerStarted","Data":"21268923ae5624dfbd4279b8a8cf2458b2301fafd025cc1aec18153eeecc507c"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.812405 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.825257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerStarted","Data":"45ed1b8d7583e4a799482dc6d4592468658cab8815404474a6558d7dfb6ab016"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.847348 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zl4ft" event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerStarted","Data":"772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.847409 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zl4ft" event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerStarted","Data":"30f8a80048b44a1cee48a71d91e02c1465004595417932fd1191a9a2ceaaeefe"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.850984 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ce7e31f1-782f-4618-9400-4049b5d50ae5","Type":"ContainerStarted","Data":"08d3e6d31e8699170a7d90158b5280e39b2ccb1ce571ffc96846237a21e0294f"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.854325 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.868187 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" event={"ID":"fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a","Type":"ContainerDied","Data":"7442ee939586d779a31f3c6be3650dfdcd22531483388d7e18b91486f1a17fca"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.868266 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7442ee939586d779a31f3c6be3650dfdcd22531483388d7e18b91486f1a17fca" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.868377 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.873149 5024 generic.go:334] "Generic (PLEG): container finished" podID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerID="5c7710a9b13e3a8575b38617de497d1605c0c70a9bd6b56c90990b4baa77750b" exitCode=0 Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.873220 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j64mb" event={"ID":"f10908eb-32ed-4e49-b1ea-7b627343b29d","Type":"ContainerDied","Data":"5c7710a9b13e3a8575b38617de497d1605c0c70a9bd6b56c90990b4baa77750b"} Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.878310 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.878281693 podStartE2EDuration="6.878281693s" podCreationTimestamp="2025-11-28 17:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.840471617 +0000 UTC m=+174.889392522" watchObservedRunningTime="2025-11-28 17:01:12.878281693 +0000 UTC m=+174.927202598" Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.884585 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnzzt"] Nov 28 17:01:12 crc kubenswrapper[5024]: I1128 17:01:12.897006 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" podStartSLOduration=153.896982275 podStartE2EDuration="2m33.896982275s" podCreationTimestamp="2025-11-28 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.895163743 +0000 UTC m=+174.944084648" watchObservedRunningTime="2025-11-28 17:01:12.896982275 +0000 UTC m=+174.945903180" Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.101079 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.101057822 podStartE2EDuration="5.101057822s" podCreationTimestamp="2025-11-28 17:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.000919862 +0000 UTC m=+175.049840767" watchObservedRunningTime="2025-11-28 17:01:13.101057822 +0000 UTC m=+175.149978727" Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.115271 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gdgdt"] Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.191750 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqfjv"] Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.425978 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:13 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:13 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:13 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.426576 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.880829 5024 generic.go:334] "Generic (PLEG): container finished" podID="542f05d2-a977-40de-887d-bc3538393234" containerID="1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.880968 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdgdt" event={"ID":"542f05d2-a977-40de-887d-bc3538393234","Type":"ContainerDied","Data":"1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.881399 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdgdt" event={"ID":"542f05d2-a977-40de-887d-bc3538393234","Type":"ContainerStarted","Data":"f72d3bb0a8135c5131e06a294577fb5031fb9fe14ed2b4b940c9813bfdb6cebd"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.885457 5024 generic.go:334] "Generic (PLEG): container finished" podID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerID="281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.885545 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnzzt" event={"ID":"610e20bb-07aa-46c2-9f83-1711f9133ad0","Type":"ContainerDied","Data":"281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.885581 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnzzt" event={"ID":"610e20bb-07aa-46c2-9f83-1711f9133ad0","Type":"ContainerStarted","Data":"d2f3f28c214d5b081e933cc23c3e66fca212d759f26b2343f4bb1e3d20dd2b25"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.889371 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerID="b8a6889521fd5c9f322d2c101db5e836428bdd6d68cab461517d811fe68a1214" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.889417 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wdtz" event={"ID":"3ef7db62-d78a-4b3d-bb51-c7a2a434d735","Type":"ContainerDied","Data":"b8a6889521fd5c9f322d2c101db5e836428bdd6d68cab461517d811fe68a1214"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.891987 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerID="1c6c2081769d4df2058cc74ea0fb949d0c6bc9f92ae1981b8856303bc27a338a" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.892070 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerDied","Data":"1c6c2081769d4df2058cc74ea0fb949d0c6bc9f92ae1981b8856303bc27a338a"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.896051 5024 generic.go:334] "Generic (PLEG): container finished" podID="ce7e31f1-782f-4618-9400-4049b5d50ae5" containerID="08d3e6d31e8699170a7d90158b5280e39b2ccb1ce571ffc96846237a21e0294f" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.896157 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ce7e31f1-782f-4618-9400-4049b5d50ae5","Type":"ContainerDied","Data":"08d3e6d31e8699170a7d90158b5280e39b2ccb1ce571ffc96846237a21e0294f"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.898319 5024 generic.go:334] "Generic (PLEG): container finished" podID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerID="9c03dbf91b90eac91de49cd007a68a0467f48a17ecf571bec277eb276410aa3a" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.898420 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqfjv" event={"ID":"1587b87d-29af-4f60-a14f-d5e1dff6f5f2","Type":"ContainerDied","Data":"9c03dbf91b90eac91de49cd007a68a0467f48a17ecf571bec277eb276410aa3a"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.898490 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqfjv" event={"ID":"1587b87d-29af-4f60-a14f-d5e1dff6f5f2","Type":"ContainerStarted","Data":"4bced54f3dd6b6c3d898d60dd4dd13d0d5216ecf6c15e33c639f9b2ed60feef8"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.900548 5024 generic.go:334] "Generic (PLEG): container finished" podID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerID="45ed1b8d7583e4a799482dc6d4592468658cab8815404474a6558d7dfb6ab016" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.900625 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerDied","Data":"45ed1b8d7583e4a799482dc6d4592468658cab8815404474a6558d7dfb6ab016"} Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.903924 5024 generic.go:334] "Generic (PLEG): container finished" podID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerID="772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662" exitCode=0 Nov 28 17:01:13 crc kubenswrapper[5024]: I1128 17:01:13.905047 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zl4ft" 
event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerDied","Data":"772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662"} Nov 28 17:01:14 crc kubenswrapper[5024]: I1128 17:01:14.426520 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:14 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:14 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:14 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:14 crc kubenswrapper[5024]: I1128 17:01:14.426623 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.057785 5024 generic.go:334] "Generic (PLEG): container finished" podID="af94043b-2d01-4b6c-b384-af1a3b65ffba" containerID="bc3dcc59442b46c69f38c39775113b2f7c0f2d97b12e84289f3f05fe18f654e9" exitCode=0 Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.057872 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af94043b-2d01-4b6c-b384-af1a3b65ffba","Type":"ContainerDied","Data":"bc3dcc59442b46c69f38c39775113b2f7c0f2d97b12e84289f3f05fe18f654e9"} Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.468541 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:15 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld Nov 28 17:01:15 crc kubenswrapper[5024]: [+]process-running ok Nov 28 17:01:15 crc kubenswrapper[5024]: healthz check failed Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.468613 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.596353 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.596430 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g" Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.596462 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.596563 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 
10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.596639 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.603131 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6jk4g"
Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.695070 5024 patch_prober.go:28] interesting pod/console-f9d7485db-r7n7g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.695153 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r7n7g" podUID="f84f4343-2000-4b50-9650-22953ca7d39d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 28 17:01:15 crc kubenswrapper[5024]: I1128 17:01:15.839163 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.064454 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7e31f1-782f-4618-9400-4049b5d50ae5-kube-api-access\") pod \"ce7e31f1-782f-4618-9400-4049b5d50ae5\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") "
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.064541 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce7e31f1-782f-4618-9400-4049b5d50ae5-kubelet-dir\") pod \"ce7e31f1-782f-4618-9400-4049b5d50ae5\" (UID: \"ce7e31f1-782f-4618-9400-4049b5d50ae5\") "
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.065077 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce7e31f1-782f-4618-9400-4049b5d50ae5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ce7e31f1-782f-4618-9400-4049b5d50ae5" (UID: "ce7e31f1-782f-4618-9400-4049b5d50ae5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.106888 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.113817 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ce7e31f1-782f-4618-9400-4049b5d50ae5","Type":"ContainerDied","Data":"f2d3f27ff826546f32165a9e57b5cf5145413ed151d3512b8ac7793b8363c748"}
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.113930 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d3f27ff826546f32165a9e57b5cf5145413ed151d3512b8ac7793b8363c748"
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.123238 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7e31f1-782f-4618-9400-4049b5d50ae5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce7e31f1-782f-4618-9400-4049b5d50ae5" (UID: "ce7e31f1-782f-4618-9400-4049b5d50ae5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.168518 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce7e31f1-782f-4618-9400-4049b5d50ae5-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.168562 5024 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce7e31f1-782f-4618-9400-4049b5d50ae5-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.397378 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vj7pt"
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.407531 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.422009 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 17:01:16 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld
Nov 28 17:01:16 crc kubenswrapper[5024]: [+]process-running ok
Nov 28 17:01:16 crc kubenswrapper[5024]: healthz check failed
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.422090 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.457431 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl"
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.473402 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af94043b-2d01-4b6c-b384-af1a3b65ffba-kube-api-access\") pod \"af94043b-2d01-4b6c-b384-af1a3b65ffba\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") "
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.473520 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af94043b-2d01-4b6c-b384-af1a3b65ffba-kubelet-dir\") pod \"af94043b-2d01-4b6c-b384-af1a3b65ffba\" (UID: \"af94043b-2d01-4b6c-b384-af1a3b65ffba\") "
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.474852 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af94043b-2d01-4b6c-b384-af1a3b65ffba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af94043b-2d01-4b6c-b384-af1a3b65ffba" (UID: "af94043b-2d01-4b6c-b384-af1a3b65ffba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.492347 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af94043b-2d01-4b6c-b384-af1a3b65ffba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af94043b-2d01-4b6c-b384-af1a3b65ffba" (UID: "af94043b-2d01-4b6c-b384-af1a3b65ffba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.577621 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af94043b-2d01-4b6c-b384-af1a3b65ffba-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.577662 5024 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af94043b-2d01-4b6c-b384-af1a3b65ffba-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 17:01:16 crc kubenswrapper[5024]: I1128 17:01:16.923408 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff"
Nov 28 17:01:17 crc kubenswrapper[5024]: I1128 17:01:17.336261 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af94043b-2d01-4b6c-b384-af1a3b65ffba","Type":"ContainerDied","Data":"04f1e433fe82b989fee01d09bd94edbedc5f32f281797f153e1f84e139363d89"}
Nov 28 17:01:17 crc kubenswrapper[5024]: I1128 17:01:17.336334 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04f1e433fe82b989fee01d09bd94edbedc5f32f281797f153e1f84e139363d89"
Nov 28 17:01:17 crc kubenswrapper[5024]: I1128 17:01:17.336422 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 17:01:17 crc kubenswrapper[5024]: I1128 17:01:17.488205 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 17:01:17 crc kubenswrapper[5024]: [-]has-synced failed: reason withheld
Nov 28 17:01:17 crc kubenswrapper[5024]: [+]process-running ok
Nov 28 17:01:17 crc kubenswrapper[5024]: healthz check failed
Nov 28 17:01:17 crc kubenswrapper[5024]: I1128 17:01:17.488270 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:18 crc kubenswrapper[5024]: I1128 17:01:18.430006 5024 patch_prober.go:28] interesting pod/router-default-5444994796-b2t9m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 17:01:18 crc kubenswrapper[5024]: [+]has-synced ok
Nov 28 17:01:18 crc kubenswrapper[5024]: [+]process-running ok
Nov 28 17:01:18 crc kubenswrapper[5024]: healthz check failed
Nov 28 17:01:18 crc kubenswrapper[5024]: I1128 17:01:18.430491 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b2t9m" podUID="7b08a2e9-f0f2-4749-9728-941815d60da9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:19 crc kubenswrapper[5024]: I1128 17:01:19.423713 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-b2t9m"
Nov 28 17:01:19 crc kubenswrapper[5024]: I1128 17:01:19.427341 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-b2t9m"
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.584868 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.585222 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.584997 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.585313 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-jvvpl"
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.585324 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.586029 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"cce0a63c9579734d99bd07bb10df9fdd41f4c8591a49ef30b5323ae311947484"} pod="openshift-console/downloads-7954f5f757-jvvpl" containerMessage="Container download-server failed liveness probe, will be restarted"
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.586142 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" containerID="cri-o://cce0a63c9579734d99bd07bb10df9fdd41f4c8591a49ef30b5323ae311947484" gracePeriod=2
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.586236 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.586267 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.684435 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-r7n7g"
Nov 28 17:01:25 crc kubenswrapper[5024]: I1128 17:01:25.688398 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-r7n7g"
Nov 28 17:01:26 crc kubenswrapper[5024]: I1128 17:01:26.693778 5024 generic.go:334] "Generic (PLEG): container finished" podID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerID="cce0a63c9579734d99bd07bb10df9fdd41f4c8591a49ef30b5323ae311947484" exitCode=0
Nov 28 17:01:26 crc kubenswrapper[5024]: I1128 17:01:26.694987 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvvpl" event={"ID":"fb6a1824-13a4-427f-b277-c41045a8ad45","Type":"ContainerDied","Data":"cce0a63c9579734d99bd07bb10df9fdd41f4c8591a49ef30b5323ae311947484"}
Nov 28 17:01:28 crc kubenswrapper[5024]: I1128 17:01:28.000142 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 17:01:30 crc kubenswrapper[5024]: I1128 17:01:30.285830 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb"
Nov 28 17:01:35 crc kubenswrapper[5024]: I1128 17:01:35.586850 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:01:35 crc kubenswrapper[5024]: I1128 17:01:35.587374 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:01:36 crc kubenswrapper[5024]: I1128 17:01:36.927187 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kzncc"
Nov 28 17:01:37 crc kubenswrapper[5024]: I1128 17:01:37.565559 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:01:37 crc kubenswrapper[5024]: I1128 17:01:37.565657 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:01:45 crc kubenswrapper[5024]: I1128 17:01:45.585002 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:01:45 crc kubenswrapper[5024]: I1128 17:01:45.585597 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.475516 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 28 17:01:49 crc kubenswrapper[5024]: E1128 17:01:49.476052 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af94043b-2d01-4b6c-b384-af1a3b65ffba" containerName="pruner"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.476077 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="af94043b-2d01-4b6c-b384-af1a3b65ffba" containerName="pruner"
Nov 28 17:01:49 crc kubenswrapper[5024]: E1128 17:01:49.476100 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e31f1-782f-4618-9400-4049b5d50ae5" containerName="pruner"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.476109 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e31f1-782f-4618-9400-4049b5d50ae5" containerName="pruner"
Nov 28 17:01:49 crc kubenswrapper[5024]: E1128 17:01:49.476119 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" containerName="collect-profiles"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.476127 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" containerName="collect-profiles"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.476266 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e31f1-782f-4618-9400-4049b5d50ae5" containerName="pruner"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.476288 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" containerName="collect-profiles"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.476296 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="af94043b-2d01-4b6c-b384-af1a3b65ffba" containerName="pruner"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.476867 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.480638 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.480880 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.484255 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.578052 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.578520 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.680331 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.680413 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:49 crc kubenswrapper[5024]: I1128 17:01:49.680566 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:50 crc kubenswrapper[5024]: I1128 17:01:50.084625 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:50 crc kubenswrapper[5024]: I1128 17:01:50.109948 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:01:50 crc kubenswrapper[5024]: E1128 17:01:50.548349 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 28 17:01:50 crc kubenswrapper[5024]: E1128 17:01:50.548904 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lnzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gdgdt_openshift-marketplace(542f05d2-a977-40de-887d-bc3538393234): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 17:01:50 crc kubenswrapper[5024]: E1128 17:01:50.550177 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gdgdt" podUID="542f05d2-a977-40de-887d-bc3538393234"
Nov 28 17:01:52 crc kubenswrapper[5024]: E1128 17:01:52.704251 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gdgdt" podUID="542f05d2-a977-40de-887d-bc3538393234"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.670813 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.671884 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.692403 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.790272 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.790334 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-var-lock\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.790361 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kube-api-access\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.891776 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-var-lock\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.892178 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kube-api-access\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.892096 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-var-lock\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.892384 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.892734 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:53 crc kubenswrapper[5024]: I1128 17:01:53.934341 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kube-api-access\") pod \"installer-9-crc\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:54 crc kubenswrapper[5024]: I1128 17:01:54.008193 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 17:01:55 crc kubenswrapper[5024]: I1128 17:01:55.584997 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:01:55 crc kubenswrapper[5024]: I1128 17:01:55.585464 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.361253 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.361883 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j58b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pnzzt_openshift-marketplace(610e20bb-07aa-46c2-9f83-1711f9133ad0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.363772 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pnzzt" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.455567 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.456208 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52d7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rc8qm_openshift-marketplace(8fae0fa8-8183-4e44-afed-63a655dd82c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.457848 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rc8qm" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.504786 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.504963 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s84v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zl4ft_openshift-marketplace(81188cf2-b85a-46bb-baf2-cda9e211eda7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.506040 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zl4ft" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.519932 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.520199 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrz5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lqfjv_openshift-marketplace(1587b87d-29af-4f60-a14f-d5e1dff6f5f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 17:01:58 crc kubenswrapper[5024]: E1128 17:01:58.521355 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lqfjv" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.072004 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lqfjv" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.072066 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zl4ft" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.072239 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rc8qm" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.073303 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pnzzt" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.153563 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.153978 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqhx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j64mb_openshift-marketplace(f10908eb-32ed-4e49-b1ea-7b627343b29d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.155531 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j64mb" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.169519 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.169742 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzhws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kx8x6_openshift-marketplace(2a0db523-f690-4c23-8324-b417a8ccd4b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 28 17:02:00 crc kubenswrapper[5024]: E1128 17:02:00.171155 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kx8x6" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2"
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.340213 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.419573 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 28 17:02:00 crc kubenswrapper[5024]: W1128 17:02:00.429667 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb4e663a3_b7d3_48f0_876c_8365348bc6ca.slice/crio-804c8a5f50123b53457bce0764d6d8f6a9d7152666b7bd9595956f1a59c67e9a WatchSource:0}: Error finding container 804c8a5f50123b53457bce0764d6d8f6a9d7152666b7bd9595956f1a59c67e9a: Status 404 returned error can't find the container with id 804c8a5f50123b53457bce0764d6d8f6a9d7152666b7bd9595956f1a59c67e9a
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.990577 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvvpl" event={"ID":"fb6a1824-13a4-427f-b277-c41045a8ad45","Type":"ContainerStarted","Data":"baee02b3a8d468e3a40361a08f7044e09909f9a79f00a9bb1c173446036587b3"}
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.990984 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jvvpl"
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.991421 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.991476 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.992460 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"80e15c22-8ae8-41b0-a8e4-ab8f153f0432","Type":"ContainerStarted","Data":"77bd0aae920960d4392434bb372391562e1d0850e588386c72f3cb07e83a9a7e"}
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.992498 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"80e15c22-8ae8-41b0-a8e4-ab8f153f0432","Type":"ContainerStarted","Data":"a5a25483a0d3398b0ad9023c260f256b3debeeadb37e91a211a85bfcc8e0d900"}
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.994927 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wdtz" event={"ID":"3ef7db62-d78a-4b3d-bb51-c7a2a434d735","Type":"ContainerDied","Data":"06fd280eb08b53b2074eeb273c87546c68e0fb9b08802b6ccfa7f81716d3cd68"}
Nov 28 17:02:00 crc kubenswrapper[5024]: I1128 17:02:00.994812 5024 generic.go:334] "Generic (PLEG): container finished" podID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerID="06fd280eb08b53b2074eeb273c87546c68e0fb9b08802b6ccfa7f81716d3cd68" exitCode=0
Nov 28 17:02:01 crc kubenswrapper[5024]: I1128 17:02:01.000141 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b4e663a3-b7d3-48f0-876c-8365348bc6ca","Type":"ContainerStarted","Data":"c6629e06455abb7e9c441ee39b1e3ad793856d0bb0d1714b52c1210863b8e0a7"}
Nov 28 17:02:01 crc kubenswrapper[5024]: I1128 17:02:01.000186 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b4e663a3-b7d3-48f0-876c-8365348bc6ca","Type":"ContainerStarted","Data":"804c8a5f50123b53457bce0764d6d8f6a9d7152666b7bd9595956f1a59c67e9a"}
Nov 28 17:02:01 crc kubenswrapper[5024]: E1128 17:02:01.002386 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j64mb" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d"
Nov 28 17:02:01 crc kubenswrapper[5024]: E1128 17:02:01.002403 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kx8x6" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2"
Nov 28 17:02:01 crc kubenswrapper[5024]: I1128 17:02:01.078136 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.078109187 podStartE2EDuration="12.078109187s" podCreationTimestamp="2025-11-28 17:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:02:01.071105636 +0000 UTC m=+223.120026551" watchObservedRunningTime="2025-11-28 17:02:01.078109187 +0000 UTC m=+223.127030092"
Nov 28 17:02:01 crc kubenswrapper[5024]: I1128 17:02:01.094669 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=8.094642262 podStartE2EDuration="8.094642262s" podCreationTimestamp="2025-11-28 17:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:02:01.093461539 +0000 UTC m=+223.142382444" watchObservedRunningTime="2025-11-28 17:02:01.094642262 +0000 UTC m=+223.143563167"
Nov 28 17:02:02 crc kubenswrapper[5024]: I1128 17:02:02.008549 5024 generic.go:334] "Generic (PLEG): container finished" podID="80e15c22-8ae8-41b0-a8e4-ab8f153f0432" containerID="77bd0aae920960d4392434bb372391562e1d0850e588386c72f3cb07e83a9a7e" exitCode=0
Nov 28 17:02:02 crc kubenswrapper[5024]: I1128 17:02:02.008662 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"80e15c22-8ae8-41b0-a8e4-ab8f153f0432","Type":"ContainerDied","Data":"77bd0aae920960d4392434bb372391562e1d0850e588386c72f3cb07e83a9a7e"}
Nov 28 17:02:02 crc kubenswrapper[5024]: I1128 17:02:02.011223 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wdtz" event={"ID":"3ef7db62-d78a-4b3d-bb51-c7a2a434d735","Type":"ContainerStarted","Data":"8c0dc02fc15f035e1021c7521149e47a36485db7f8f5abae5d4987f25a18f701"}
Nov 28 17:02:02 crc kubenswrapper[5024]: I1128 17:02:02.011947 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:02:02 crc kubenswrapper[5024]: I1128 17:02:02.012007 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:02:02 crc kubenswrapper[5024]: I1128 17:02:02.055180 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2wdtz" podStartSLOduration=5.512157003 podStartE2EDuration="53.055148848s" podCreationTimestamp="2025-11-28 17:01:09 +0000 UTC" firstStartedPulling="2025-11-28 17:01:13.890516936 +0000 UTC m=+175.939437841" lastFinishedPulling="2025-11-28 17:02:01.433508781 +0000 UTC m=+223.482429686" observedRunningTime="2025-11-28 17:02:02.051184365 +0000 UTC m=+224.100105280" watchObservedRunningTime="2025-11-28 17:02:02.055148848 +0000 UTC m=+224.104069743"
Nov 28 17:02:03 crc kubenswrapper[5024]: I1128 17:02:03.268540 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:02:03 crc kubenswrapper[5024]: I1128 17:02:03.452247 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kube-api-access\") pod \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") "
Nov 28 17:02:03 crc kubenswrapper[5024]: I1128 17:02:03.452346 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kubelet-dir\") pod \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\" (UID: \"80e15c22-8ae8-41b0-a8e4-ab8f153f0432\") "
Nov 28 17:02:03 crc kubenswrapper[5024]: I1128 17:02:03.452472 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "80e15c22-8ae8-41b0-a8e4-ab8f153f0432" (UID: "80e15c22-8ae8-41b0-a8e4-ab8f153f0432"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:02:03 crc kubenswrapper[5024]: I1128 17:02:03.452650 5024 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 17:02:03 crc kubenswrapper[5024]: I1128 17:02:03.460311 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "80e15c22-8ae8-41b0-a8e4-ab8f153f0432" (UID: "80e15c22-8ae8-41b0-a8e4-ab8f153f0432"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:02:03 crc kubenswrapper[5024]: I1128 17:02:03.553731 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80e15c22-8ae8-41b0-a8e4-ab8f153f0432-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 17:02:04 crc kubenswrapper[5024]: I1128 17:02:04.023901 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"80e15c22-8ae8-41b0-a8e4-ab8f153f0432","Type":"ContainerDied","Data":"a5a25483a0d3398b0ad9023c260f256b3debeeadb37e91a211a85bfcc8e0d900"}
Nov 28 17:02:04 crc kubenswrapper[5024]: I1128 17:02:04.023955 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a25483a0d3398b0ad9023c260f256b3debeeadb37e91a211a85bfcc8e0d900"
Nov 28 17:02:04 crc kubenswrapper[5024]: I1128 17:02:04.023966 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 28 17:02:05 crc kubenswrapper[5024]: I1128 17:02:05.584731 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:02:05 crc kubenswrapper[5024]: I1128 17:02:05.584787 5024 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvvpl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Nov 28 17:02:05 crc kubenswrapper[5024]: I1128 17:02:05.584803 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:02:05 crc kubenswrapper[5024]: I1128 17:02:05.584849 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvvpl" podUID="fb6a1824-13a4-427f-b277-c41045a8ad45" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Nov 28 17:02:07 crc kubenswrapper[5024]: I1128 17:02:07.565859 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:02:07 crc kubenswrapper[5024]: I1128 17:02:07.566314 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:02:07 crc kubenswrapper[5024]: I1128 17:02:07.566390 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf"
Nov 28 17:02:07 crc kubenswrapper[5024]: I1128 17:02:07.567328 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 17:02:07 crc kubenswrapper[5024]: I1128 17:02:07.567394 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3" gracePeriod=600
Nov 28 17:02:10 crc kubenswrapper[5024]: I1128 17:02:10.112975 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2wdtz"
Nov 28 17:02:10 crc kubenswrapper[5024]: I1128 17:02:10.116049 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2wdtz"
Nov 28 17:02:13 crc kubenswrapper[5024]: I1128 17:02:13.389465 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2wdtz"
Nov 28 17:02:13 crc kubenswrapper[5024]: I1128 17:02:13.438656 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2wdtz"
Nov 28 17:02:14 crc kubenswrapper[5024]: I1128 17:02:14.036671 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3" exitCode=0
Nov 28 17:02:14 crc kubenswrapper[5024]: I1128 17:02:14.036983 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3"}
Nov 28 17:02:15 crc kubenswrapper[5024]: I1128 17:02:15.055770 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"a5cfa405463e6da44c10e5aaed39d084534cafde9adb70808f0b8a54ca8b0cfc"}
Nov 28 17:02:15 crc kubenswrapper[5024]: I1128 17:02:15.605820 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jvvpl"
Nov 28 17:02:15 crc kubenswrapper[5024]: I1128 17:02:15.797814 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2wdtz"]
Nov 28 17:02:16 crc kubenswrapper[5024]: I1128 17:02:16.060265 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2wdtz" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="registry-server" containerID="cri-o://8c0dc02fc15f035e1021c7521149e47a36485db7f8f5abae5d4987f25a18f701" gracePeriod=2
Nov 28 17:02:17 crc kubenswrapper[5024]: I1128 17:02:17.226451 5024 generic.go:334] "Generic (PLEG): container finished" podID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerID="8c0dc02fc15f035e1021c7521149e47a36485db7f8f5abae5d4987f25a18f701" exitCode=0
Nov 28 17:02:17 crc kubenswrapper[5024]: I1128 17:02:17.226917 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wdtz" event={"ID":"3ef7db62-d78a-4b3d-bb51-c7a2a434d735","Type":"ContainerDied","Data":"8c0dc02fc15f035e1021c7521149e47a36485db7f8f5abae5d4987f25a18f701"}
Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.716839 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wdtz"
Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.844171 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-catalog-content\") pod \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") "
Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.844313 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcztk\" (UniqueName: \"kubernetes.io/projected/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-kube-api-access-vcztk\") pod \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") "
Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.844386 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-utilities\") pod \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\" (UID: \"3ef7db62-d78a-4b3d-bb51-c7a2a434d735\") "
Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.845452 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-utilities" (OuterVolumeSpecName: "utilities") pod "3ef7db62-d78a-4b3d-bb51-c7a2a434d735" (UID: "3ef7db62-d78a-4b3d-bb51-c7a2a434d735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.851068 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-kube-api-access-vcztk" (OuterVolumeSpecName: "kube-api-access-vcztk") pod "3ef7db62-d78a-4b3d-bb51-c7a2a434d735" (UID: "3ef7db62-d78a-4b3d-bb51-c7a2a434d735"). InnerVolumeSpecName "kube-api-access-vcztk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.906823 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ef7db62-d78a-4b3d-bb51-c7a2a434d735" (UID: "3ef7db62-d78a-4b3d-bb51-c7a2a434d735"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.946335 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcztk\" (UniqueName: \"kubernetes.io/projected/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-kube-api-access-vcztk\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.946637 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:18 crc kubenswrapper[5024]: I1128 17:02:18.946649 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef7db62-d78a-4b3d-bb51-c7a2a434d735-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.297850 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerStarted","Data":"d70d80e64e2e18a34726389e29c66130c41a076b0ee21e580d4a56e26ca252a8"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.327728 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zl4ft" event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerStarted","Data":"986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.334103 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdgdt" event={"ID":"542f05d2-a977-40de-887d-bc3538393234","Type":"ContainerStarted","Data":"12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.336899 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnzzt" event={"ID":"610e20bb-07aa-46c2-9f83-1711f9133ad0","Type":"ContainerStarted","Data":"8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.355725 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j64mb" event={"ID":"f10908eb-32ed-4e49-b1ea-7b627343b29d","Type":"ContainerStarted","Data":"f99c89d30d47ae0260479d7a88fc8826c8ac67cf3effa3b0137593b2afdfb678"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.360954 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wdtz" event={"ID":"3ef7db62-d78a-4b3d-bb51-c7a2a434d735","Type":"ContainerDied","Data":"7ddae49d83bc1611c3561f9d4f8f513d51b107089a5cfe21e95831394128b7fb"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.361004 5024 scope.go:117] "RemoveContainer" containerID="8c0dc02fc15f035e1021c7521149e47a36485db7f8f5abae5d4987f25a18f701" Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.361146 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2wdtz" Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.494095 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerStarted","Data":"2271706b2324792f8ab3fcbb64ab5757d5df325ae50cef2460cb667373cdb2bf"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.526399 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqfjv" event={"ID":"1587b87d-29af-4f60-a14f-d5e1dff6f5f2","Type":"ContainerStarted","Data":"78ec764dcaeb663dcb4b75ef03dbd4be4617ca284c535318eb85d27750600480"} Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.539248 5024 scope.go:117] "RemoveContainer" containerID="06fd280eb08b53b2074eeb273c87546c68e0fb9b08802b6ccfa7f81716d3cd68" Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.795303 5024 scope.go:117] "RemoveContainer" containerID="b8a6889521fd5c9f322d2c101db5e836428bdd6d68cab461517d811fe68a1214" Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.991772 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2wdtz"] Nov 28 17:02:19 crc kubenswrapper[5024]: I1128 17:02:19.996369 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2wdtz"] Nov 28 17:02:20 crc kubenswrapper[5024]: I1128 17:02:20.658409 5024 generic.go:334] "Generic (PLEG): container finished" podID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerID="986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0" exitCode=0 Nov 28 17:02:20 crc kubenswrapper[5024]: I1128 17:02:20.773012 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" path="/var/lib/kubelet/pods/3ef7db62-d78a-4b3d-bb51-c7a2a434d735/volumes" Nov 28 17:02:20 crc kubenswrapper[5024]: I1128 17:02:20.773933 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zl4ft" event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerDied","Data":"986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0"} Nov 28 17:02:21 crc kubenswrapper[5024]: I1128 17:02:21.667745 5024 generic.go:334] "Generic (PLEG): container finished" podID="542f05d2-a977-40de-887d-bc3538393234" containerID="12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05" exitCode=0 Nov 28 17:02:21 crc kubenswrapper[5024]: I1128 17:02:21.667993 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdgdt" event={"ID":"542f05d2-a977-40de-887d-bc3538393234","Type":"ContainerDied","Data":"12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05"} Nov 28 17:02:21 crc kubenswrapper[5024]: I1128 17:02:21.675229 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerID="2271706b2324792f8ab3fcbb64ab5757d5df325ae50cef2460cb667373cdb2bf" exitCode=0 Nov 28 17:02:21 crc kubenswrapper[5024]: I1128 17:02:21.675282 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerDied","Data":"2271706b2324792f8ab3fcbb64ab5757d5df325ae50cef2460cb667373cdb2bf"} Nov 28 17:02:22 crc kubenswrapper[5024]: I1128 17:02:22.698861 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerID="d70d80e64e2e18a34726389e29c66130c41a076b0ee21e580d4a56e26ca252a8" exitCode=0 Nov 28 17:02:22 crc kubenswrapper[5024]: I1128 17:02:22.699459 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerDied","Data":"d70d80e64e2e18a34726389e29c66130c41a076b0ee21e580d4a56e26ca252a8"} Nov 28 17:02:22 crc kubenswrapper[5024]: I1128 17:02:22.705469 5024 generic.go:334] "Generic (PLEG): container finished" podID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerID="f99c89d30d47ae0260479d7a88fc8826c8ac67cf3effa3b0137593b2afdfb678" exitCode=0 Nov 28 17:02:22 crc kubenswrapper[5024]: I1128 17:02:22.705548 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j64mb" event={"ID":"f10908eb-32ed-4e49-b1ea-7b627343b29d","Type":"ContainerDied","Data":"f99c89d30d47ae0260479d7a88fc8826c8ac67cf3effa3b0137593b2afdfb678"} Nov 28 17:02:24 crc kubenswrapper[5024]: I1128 17:02:24.719725 5024 generic.go:334] "Generic (PLEG): container finished" podID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerID="8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b" exitCode=0 Nov 28 17:02:24 crc kubenswrapper[5024]: I1128 17:02:24.719877 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnzzt" event={"ID":"610e20bb-07aa-46c2-9f83-1711f9133ad0","Type":"ContainerDied","Data":"8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b"} Nov 28 17:02:24 crc kubenswrapper[5024]: I1128 17:02:24.734552 5024 generic.go:334] "Generic (PLEG): container finished" podID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerID="78ec764dcaeb663dcb4b75ef03dbd4be4617ca284c535318eb85d27750600480" exitCode=0 Nov 28 17:02:24 crc kubenswrapper[5024]: I1128 17:02:24.734624 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqfjv" event={"ID":"1587b87d-29af-4f60-a14f-d5e1dff6f5f2","Type":"ContainerDied","Data":"78ec764dcaeb663dcb4b75ef03dbd4be4617ca284c535318eb85d27750600480"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.844436 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnzzt" event={"ID":"610e20bb-07aa-46c2-9f83-1711f9133ad0","Type":"ContainerStarted","Data":"da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.849097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j64mb" event={"ID":"f10908eb-32ed-4e49-b1ea-7b627343b29d","Type":"ContainerStarted","Data":"576af8cb21cd7732d93f767a0937b3987e9d629196b3a5dca1628a39588d29a5"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.851549 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerStarted","Data":"b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.853837 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqfjv" event={"ID":"1587b87d-29af-4f60-a14f-d5e1dff6f5f2","Type":"ContainerStarted","Data":"a4c736a350930343aac2858dfad7e47198a5732b12e77bd449bfbbbaf5de2f7f"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.856844 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerStarted","Data":"5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.858956 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zl4ft" event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerStarted","Data":"010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.860981 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdgdt" event={"ID":"542f05d2-a977-40de-887d-bc3538393234","Type":"ContainerStarted","Data":"92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393"} Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.872991 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pnzzt" podStartSLOduration=3.958660486 podStartE2EDuration="1m23.872951742s" podCreationTimestamp="2025-11-28 17:01:11 +0000 UTC" firstStartedPulling="2025-11-28 17:01:13.887947003 +0000 UTC m=+175.936867908" lastFinishedPulling="2025-11-28 17:02:33.802238259 +0000 UTC m=+255.851159164" observedRunningTime="2025-11-28 17:02:34.868435513 +0000 UTC m=+256.917356418" watchObservedRunningTime="2025-11-28 17:02:34.872951742 +0000 UTC m=+256.921872647" Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.896719 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rc8qm" podStartSLOduration=7.031802647 podStartE2EDuration="1m26.896685825s" podCreationTimestamp="2025-11-28 17:01:08 +0000 UTC" firstStartedPulling="2025-11-28 17:01:13.895349683 +0000 UTC m=+175.944270588" lastFinishedPulling="2025-11-28 17:02:33.760232861 +0000 UTC m=+255.809153766" observedRunningTime="2025-11-28 17:02:34.892067622 +0000 UTC m=+256.940988537" watchObservedRunningTime="2025-11-28 17:02:34.896685825 +0000 UTC m=+256.945606740" Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.956505 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gdgdt" podStartSLOduration=3.934856334 podStartE2EDuration="1m23.956477993s" podCreationTimestamp="2025-11-28 17:01:11 +0000 UTC" firstStartedPulling="2025-11-28 17:01:13.882343673 +0000 UTC m=+175.931264568" lastFinishedPulling="2025-11-28 17:02:33.903965322 +0000 UTC m=+255.952886227" observedRunningTime="2025-11-28 17:02:34.927722237 +0000 UTC m=+256.976643152" watchObservedRunningTime="2025-11-28 17:02:34.956477993 +0000 UTC m=+257.005398898" Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.961098 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kx8x6" podStartSLOduration=7.061308622 podStartE2EDuration="1m26.961078935s" podCreationTimestamp="2025-11-28 17:01:08 +0000 UTC" firstStartedPulling="2025-11-28 17:01:13.903110144 +0000 UTC m=+175.952031049" lastFinishedPulling="2025-11-28 17:02:33.802880467 +0000 UTC m=+255.851801362" observedRunningTime="2025-11-28 17:02:34.955230777 +0000 UTC m=+257.004151682" watchObservedRunningTime="2025-11-28 17:02:34.961078935 +0000 UTC m=+257.009999850" Nov 28 17:02:34 crc kubenswrapper[5024]: I1128 17:02:34.981755 5024 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-j64mb" podStartSLOduration=5.992829662 podStartE2EDuration="1m26.981729929s" podCreationTimestamp="2025-11-28 17:01:08 +0000 UTC" firstStartedPulling="2025-11-28 17:01:12.894938267 +0000 UTC m=+174.943859172" lastFinishedPulling="2025-11-28 17:02:33.883838534 +0000 UTC m=+255.932759439" observedRunningTime="2025-11-28 17:02:34.977293731 +0000 UTC m=+257.026214656" watchObservedRunningTime="2025-11-28 17:02:34.981729929 +0000 UTC m=+257.030650834" Nov 28 17:02:35 crc kubenswrapper[5024]: I1128 17:02:35.001352 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lqfjv" podStartSLOduration=3.124390997 podStartE2EDuration="1m23.001326022s" podCreationTimestamp="2025-11-28 17:01:12 +0000 UTC" firstStartedPulling="2025-11-28 17:01:13.900034797 +0000 UTC m=+175.948955702" lastFinishedPulling="2025-11-28 17:02:33.776969832 +0000 UTC m=+255.825890727" observedRunningTime="2025-11-28 17:02:34.996638837 +0000 UTC m=+257.045559742" watchObservedRunningTime="2025-11-28 17:02:35.001326022 +0000 UTC m=+257.050246927" Nov 28 17:02:35 crc kubenswrapper[5024]: I1128 17:02:35.025934 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zl4ft" podStartSLOduration=4.119099583 podStartE2EDuration="1m25.025906699s" podCreationTimestamp="2025-11-28 17:01:10 +0000 UTC" firstStartedPulling="2025-11-28 17:01:12.853199179 +0000 UTC m=+174.902120084" lastFinishedPulling="2025-11-28 17:02:33.760006295 +0000 UTC m=+255.808927200" observedRunningTime="2025-11-28 17:02:35.02212952 +0000 UTC m=+257.071050425" watchObservedRunningTime="2025-11-28 17:02:35.025906699 +0000 UTC m=+257.074827604" Nov 28 17:02:35 crc kubenswrapper[5024]: I1128 17:02:35.777623 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jhtl"] Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.514811 5024 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.515393 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="registry-server" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.515408 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="registry-server" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.515417 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e15c22-8ae8-41b0-a8e4-ab8f153f0432" containerName="pruner" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.515423 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e15c22-8ae8-41b0-a8e4-ab8f153f0432" containerName="pruner" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.515445 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="extract-content" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.515450 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="extract-content" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.515459 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="extract-utilities" Nov 28 17:02:38 crc 
kubenswrapper[5024]: I1128 17:02:38.515465 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="extract-utilities" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.515584 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e15c22-8ae8-41b0-a8e4-ab8f153f0432" containerName="pruner" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.515595 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef7db62-d78a-4b3d-bb51-c7a2a434d735" containerName="registry-server" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516010 5024 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516218 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516467 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1" gracePeriod=15 Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516475 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f" gracePeriod=15 Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516501 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db" gracePeriod=15 Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516545 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52" gracePeriod=15 Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516607 5024 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.516529 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72" gracePeriod=15 Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.516880 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517314 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.517332 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517345 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.517363 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517371 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.517389 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517396 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.517414 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517421 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.517431 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517439 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.517451 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517460 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517596 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517610 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517619 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517629 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517645 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.517655 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.525611 5024 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.561406 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614167 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614235 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614292 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614320 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614342 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614444 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614464 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.614487 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.715881 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.715972 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716036 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716069 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716027 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716110 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716140 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716169 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716175 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716197 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716213 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716279 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716300 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716354 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.716389 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: I1128 17:02:38.859532 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:38 crc kubenswrapper[5024]: E1128 17:02:38.887443 5024 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.141:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c3a64dc1c2f8a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 17:02:38.886367114 +0000 UTC m=+260.935288009,LastTimestamp:2025-11-28 17:02:38.886367114 +0000 UTC m=+260.935288009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.738900 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.739267 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.781457 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.782395 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.782938 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.790246 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.790320 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.837841 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.838564 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.839091 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.839442 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.891270 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"869acfd7baf3f32f7f88c5cf130a0249bc97e7280c9f245fdccda239361ff53d"} Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.928713 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.929406 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.929805 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.930081 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.932477 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.933409 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.933933 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.934328 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.937533 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.938360 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.977685 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.978531 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.978902 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.979528 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:39 crc kubenswrapper[5024]: I1128 17:02:39.979742 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.062224 5024 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.062860 5024 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.063119 5024 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.063309 5024 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.063499 5024 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.063538 5024 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.063739 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="200ms" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.265052 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="400ms" Nov 28 17:02:40 crc kubenswrapper[5024]: E1128 17:02:40.666663 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="800ms" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.898328 5024 generic.go:334] "Generic (PLEG): container finished" podID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" containerID="c6629e06455abb7e9c441ee39b1e3ad793856d0bb0d1714b52c1210863b8e0a7" exitCode=0 Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.898406 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b4e663a3-b7d3-48f0-876c-8365348bc6ca","Type":"ContainerDied","Data":"c6629e06455abb7e9c441ee39b1e3ad793856d0bb0d1714b52c1210863b8e0a7"} Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.899212 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.899577 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.899859 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" 
pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.899918 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff"} Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.902262 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.902606 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.902916 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.903167 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.903381 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.903613 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.903822 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.908900 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.911215 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.911904 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f" exitCode=0 Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.911940 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52" exitCode=0 Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.911950 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db" exitCode=0 Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.911960 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72" exitCode=2 Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.911973 5024 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1" exitCode=0 Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.911981 5024 scope.go:117] "RemoveContainer" containerID="6fa2dcf7902826b2ca4a4fc8c07d963d1c651f76cc49f8f19f9a13802e938b96" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.950500 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.951170 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.951495 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.951730 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.951978 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: 
connect: connection refused" Nov 28 17:02:40 crc kubenswrapper[5024]: I1128 17:02:40.952224 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.089500 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.090211 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.128328 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.129235 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.129593 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.129998 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.130750 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.131177 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.131534 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.411339 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.412360 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.413160 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.413510 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.413824 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.414119 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.414522 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.414889 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.415213 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.458267 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.458692 5024 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.458870 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.458411 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.458793 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.458909 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.459851 5024 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.459891 5024 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.459907 5024 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.465402 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.465590 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:02:41 crc kubenswrapper[5024]: E1128 17:02:41.469004 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="1.6s" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.508232 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.508638 
5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.508837 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.509116 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.509501 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.509729 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.509932 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.510168 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.510525 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.920603 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.922696 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.922721 5024 scope.go:117] "RemoveContainer" containerID="0b9b6ac01fcffb3095c8ae4a658e9082765629d47eb4c6031b3ac8b15dbb2c4f" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.945590 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.946137 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.946732 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.947183 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.947581 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.949215 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.949523 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.949830 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.953756 5024 scope.go:117] "RemoveContainer" 
containerID="2e3be71c6ad5205a60a2b0cdbb53d8ab86dd4f1c7ae3207a212139163a9aab52" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.979638 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gdgdt" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.979927 5024 scope.go:117] "RemoveContainer" containerID="432126aaa0cf925f5d5c0d3459ef15d6297b1a39507a6d28931afc027f97c8db" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.980321 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.980779 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.981447 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.981699 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.981957 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.982235 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.982537 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.982936 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.995815 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.996386 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.997455 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.997830 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.998331 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.998631 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.998805 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.999014 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:41 crc kubenswrapper[5024]: I1128 17:02:41.999283 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc 
kubenswrapper[5024]: I1128 17:02:42.000222 5024 scope.go:117] "RemoveContainer" containerID="803e4e70b09349f83b9de3226a84a2319a735c8d53d24be264cc0ba5ab0a4a72" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.001447 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.001484 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.017953 5024 scope.go:117] "RemoveContainer" containerID="578494938b16b514e5bb4a8e6f6ddac931eeaa4ee2f1cddd6fbb71fb0b9208d1" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.045115 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.046162 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.046492 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.047191 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.047561 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.047979 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.048207 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.049186 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.049903 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.050161 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.050211 5024 scope.go:117] "RemoveContainer" containerID="2ddf5980ba5019e75eeb75dc617874990a096fb99a7266d5f7a423e04f489a75" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.206993 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.207644 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.207944 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.208279 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.208464 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.208626 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.208979 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.209345 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.209589 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.209848 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.270777 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kube-api-access\") pod \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.270859 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-var-lock\") pod \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.270923 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kubelet-dir\") pod \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\" (UID: \"b4e663a3-b7d3-48f0-876c-8365348bc6ca\") " Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.271081 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b4e663a3-b7d3-48f0-876c-8365348bc6ca" (UID: "b4e663a3-b7d3-48f0-876c-8365348bc6ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.271081 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-var-lock" (OuterVolumeSpecName: "var-lock") pod "b4e663a3-b7d3-48f0-876c-8365348bc6ca" (UID: "b4e663a3-b7d3-48f0-876c-8365348bc6ca"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.271287 5024 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.271304 5024 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.278372 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b4e663a3-b7d3-48f0-876c-8365348bc6ca" (UID: "b4e663a3-b7d3-48f0-876c-8365348bc6ca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:42 crc kubenswrapper[5024]: E1128 17:02:42.284868 5024 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.141:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c3a64dc1c2f8a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 17:02:38.886367114 +0000 UTC m=+260.935288009,LastTimestamp:2025-11-28 17:02:38.886367114 +0000 UTC m=+260.935288009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.332500 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.333560 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.372154 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e663a3-b7d3-48f0-876c-8365348bc6ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.377526 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.378397 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.378815 5024 status_manager.go:851] 
"Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.379435 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.379710 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.380024 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.380389 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.380650 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.380947 5024 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.381260 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.381534 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.509248 5024 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.930387 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.930449 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b4e663a3-b7d3-48f0-876c-8365348bc6ca","Type":"ContainerDied","Data":"804c8a5f50123b53457bce0764d6d8f6a9d7152666b7bd9595956f1a59c67e9a"} Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.930532 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="804c8a5f50123b53457bce0764d6d8f6a9d7152666b7bd9595956f1a59c67e9a" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.937668 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.938134 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.938562 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.939138 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.939713 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.940314 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.940777 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.941273 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.941776 5024 status_manager.go:851] "Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.985865 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.986663 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.987385 5024 status_manager.go:851] "Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.996524 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.997571 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.998211 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:42 crc kubenswrapper[5024]: I1128 17:02:42.998932 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc 
kubenswrapper[5024]: I1128 17:02:43.000466 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.002223 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.002545 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.002856 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.003380 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.004063 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.004513 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.004916 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.005303 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.005796 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.006256 5024 status_manager.go:851] "Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.006556 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: I1128 17:02:43.006923 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:43 crc kubenswrapper[5024]: E1128 17:02:43.070855 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="3.2s" Nov 28 17:02:46 crc kubenswrapper[5024]: E1128 17:02:46.272599 5024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.141:6443: connect: connection refused" interval="6.4s" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.501708 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.502102 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.502332 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.502534 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.502774 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.503132 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.503510 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.504014 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:48 crc kubenswrapper[5024]: I1128 17:02:48.504392 5024 status_manager.go:851] "Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.497835 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.500047 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.500711 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.500984 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.501323 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.501608 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.501948 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.502763 5024 status_manager.go:851] "Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.503268 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.503954 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.514352 5024 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.514390 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:02:49 crc kubenswrapper[5024]: E1128 17:02:49.514862 5024 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.515499 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:49 crc kubenswrapper[5024]: W1128 17:02:49.545735 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a3efe66d2cbe2b9f01edb6f6c53bdee0da0f3c717cf67e91ca44ea961196ce3f WatchSource:0}: Error finding container a3efe66d2cbe2b9f01edb6f6c53bdee0da0f3c717cf67e91ca44ea961196ce3f: Status 404 returned error can't find the container with id a3efe66d2cbe2b9f01edb6f6c53bdee0da0f3c717cf67e91ca44ea961196ce3f Nov 28 17:02:49 crc kubenswrapper[5024]: I1128 17:02:49.974253 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a3efe66d2cbe2b9f01edb6f6c53bdee0da0f3c717cf67e91ca44ea961196ce3f"} Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.990293 5024 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6873f23861a5f0c5be09198e39488d2312fb2892b42b1ed54a07e5246be0e03d" exitCode=0 Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.990412 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6873f23861a5f0c5be09198e39488d2312fb2892b42b1ed54a07e5246be0e03d"} Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.990872 5024 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.991136 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.992005 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: E1128 17:02:51.992178 5024 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.992582 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.993119 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.993789 5024 status_manager.go:851] "Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.994031 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.994198 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.994425 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.994605 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.995057 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.997076 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.997153 5024 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f" exitCode=1 Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.997213 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f"} Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.998011 5024 scope.go:117] "RemoveContainer" containerID="9a214d54b6abce0aad4bdf73cb791e7c17ac4004939dd5c47664a2ac1400841f" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.999252 5024 status_manager.go:851] "Failed to get status for pod" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" pod="openshift-marketplace/certified-operators-j64mb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j64mb\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:51 crc kubenswrapper[5024]: I1128 17:02:51.999802 5024 status_manager.go:851] "Failed to get status for pod" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" pod="openshift-marketplace/redhat-operators-lqfjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lqfjv\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.000273 5024 status_manager.go:851] "Failed to get status for pod" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" pod="openshift-marketplace/redhat-marketplace-zl4ft" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zl4ft\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.000530 5024 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.001135 5024 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.001595 5024 status_manager.go:851] "Failed to get status for pod" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" pod="openshift-marketplace/redhat-operators-pnzzt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pnzzt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.001922 5024 status_manager.go:851] "Failed to get status for pod" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" pod="openshift-marketplace/community-operators-rc8qm" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rc8qm\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.002310 5024 status_manager.go:851] "Failed to get status for pod" podUID="542f05d2-a977-40de-887d-bc3538393234" pod="openshift-marketplace/redhat-marketplace-gdgdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gdgdt\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.002633 5024 status_manager.go:851] "Failed to get status for pod" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" pod="openshift-marketplace/certified-operators-kx8x6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kx8x6\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.002988 5024 status_manager.go:851] "Failed to get status for pod" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.141:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[5024]: E1128 17:02:52.286349 5024 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.141:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c3a64dc1c2f8a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 17:02:38.886367114 +0000 UTC m=+260.935288009,LastTimestamp:2025-11-28 17:02:38.886367114 +0000 UTC m=+260.935288009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 17:02:52 crc kubenswrapper[5024]: I1128 17:02:52.441920 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:02:53 crc kubenswrapper[5024]: I1128 17:02:53.007800 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c397d57dc01bbe6f75d8b33eaaad5283bcb16087dee326045a1c06cc3f3609b"} Nov 28 17:02:53 crc kubenswrapper[5024]: I1128 17:02:53.007857 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9fa4f973b7271ede61b279348b6a741499c231b444a30ee563b74d6c4159dceb"} Nov 28 17:02:53 crc kubenswrapper[5024]: I1128 17:02:53.007879 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21f1077b0b9f77c613356b5d755b8884b86d431f26adad7710f72588c1efdab8"} Nov 28 17:02:53 crc kubenswrapper[5024]: I1128 17:02:53.007888 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5324073ac88182e2f8f7372255f7cc91f990dc65d1c23cca9c7508ca7c95a1b5"} Nov 28 17:02:53 crc kubenswrapper[5024]: I1128 17:02:53.029915 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 17:02:53 crc kubenswrapper[5024]: I1128 17:02:53.030010 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"302e059f607ff18b6867cf8ecd6fa1a9d79e78531a2938068df875ba66c2b465"} Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.040211 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c7273ebbcb2df908b8634531af5de2e67db32dc020dba2cd2f2e263012b3f74d"} Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.040427 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.040579 5024 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.040606 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.515822 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.515861 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.524135 5024 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]log ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]etcd ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/generic-apiserver-start-informers ok Nov 28 17:02:54 crc kubenswrapper[5024]: 
[+]poststarthook/priority-and-fairness-config-consumer ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/priority-and-fairness-filter ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-apiextensions-informers ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-apiextensions-controllers ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/crd-informer-synced ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-system-namespaces-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 28 17:02:54 crc kubenswrapper[5024]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 28 17:02:54 crc kubenswrapper[5024]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/bootstrap-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/start-kube-aggregator-informers ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/apiservice-registration-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/apiservice-discovery-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]autoregister-completion ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/apiservice-openapi-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 28 17:02:54 crc kubenswrapper[5024]: livez check failed Nov 28 17:02:54 crc kubenswrapper[5024]: I1128 17:02:54.524195 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:02:55 crc kubenswrapper[5024]: I1128 17:02:55.955845 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:02:59 crc kubenswrapper[5024]: I1128 17:02:59.050932 5024 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:59 crc kubenswrapper[5024]: I1128 17:02:59.524668 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:59 crc kubenswrapper[5024]: 
I1128 17:02:59.528226 5024 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1790f8b5-663c-4b8f-bbb5-d619da24252a"
Nov 28 17:03:00 crc kubenswrapper[5024]: I1128 17:03:00.082841 5024 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c"
Nov 28 17:03:00 crc kubenswrapper[5024]: I1128 17:03:00.082881 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c"
Nov 28 17:03:00 crc kubenswrapper[5024]: I1128 17:03:00.813225 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerName="oauth-openshift" containerID="cri-o://bc21249d02f9c398c1a5ee9803f1b19752c4c0f6419a7f973cf32fc404cbb3f5" gracePeriod=15
Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.093838 5024 generic.go:334] "Generic (PLEG): container finished" podID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerID="bc21249d02f9c398c1a5ee9803f1b19752c4c0f6419a7f973cf32fc404cbb3f5" exitCode=0
Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.093981 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" event={"ID":"231e7091-0809-44e9-9d1a-d5a1ea092a64","Type":"ContainerDied","Data":"bc21249d02f9c398c1a5ee9803f1b19752c4c0f6419a7f973cf32fc404cbb3f5"}
Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.094253 5024 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c"
Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.094269 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c"
Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.100520 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
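
With the readiness probe now reporting "ready", the apiserver's health endpoints answer again; the [+]/[-] check list logged at 17:02:54 is what these endpoints return in verbose mode. A minimal Go sketch that queries /livez by hand: the endpoint path is standard kube-apiserver, but the insecure TLS setting is illustration-only, and depending on cluster policy the request may also need an Authorization header.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Illustration only: a real client would trust the cluster CA bundle
    	// instead of skipping verification.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	// ?verbose makes /livez return the per-check [+]/[-] lines seen above.
    	resp, err := client.Get("https://api-int.crc.testing:6443/livez?verbose")
    	if err != nil {
    		fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status)
    	fmt.Println(string(body))
    }
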
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.257945 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-serving-cert\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258244 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-trusted-ca-bundle\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258370 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgqk7\" (UniqueName: \"kubernetes.io/projected/231e7091-0809-44e9-9d1a-d5a1ea092a64-kube-api-access-vgqk7\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258449 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-idp-0-file-data\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258533 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-session\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258616 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-ocp-branding-template\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258746 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-service-ca\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258823 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-router-certs\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.258898 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-policies\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc 
kubenswrapper[5024]: I1128 17:03:01.258970 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-error\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.259058 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-cliconfig\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.259145 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-provider-selection\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.259272 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-dir\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.259357 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-login\") pod \"231e7091-0809-44e9-9d1a-d5a1ea092a64\" (UID: \"231e7091-0809-44e9-9d1a-d5a1ea092a64\") " Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.274856 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.275400 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.275528 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.276061 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.276504 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.277555 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.278173 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.278829 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.279442 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.279869 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231e7091-0809-44e9-9d1a-d5a1ea092a64-kube-api-access-vgqk7" (OuterVolumeSpecName: "kube-api-access-vgqk7") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "kube-api-access-vgqk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.280178 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.280489 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.280925 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.281242 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "231e7091-0809-44e9-9d1a-d5a1ea092a64" (UID: "231e7091-0809-44e9-9d1a-d5a1ea092a64"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.362943 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.362986 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363000 5024 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363011 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363039 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363049 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363061 5024 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/231e7091-0809-44e9-9d1a-d5a1ea092a64-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363070 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363081 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363090 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363103 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgqk7\" (UniqueName: \"kubernetes.io/projected/231e7091-0809-44e9-9d1a-d5a1ea092a64-kube-api-access-vgqk7\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363118 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363130 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:01 crc kubenswrapper[5024]: I1128 17:03:01.363141 5024 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/231e7091-0809-44e9-9d1a-d5a1ea092a64-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:02 crc kubenswrapper[5024]: I1128 17:03:02.101766 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" event={"ID":"231e7091-0809-44e9-9d1a-d5a1ea092a64","Type":"ContainerDied","Data":"a5f598a84dabb88a64291037cf58bfb0ae88070661c646835d6c37f806d6f655"} Nov 28 17:03:02 crc kubenswrapper[5024]: I1128 17:03:02.101847 5024 scope.go:117] "RemoveContainer" containerID="bc21249d02f9c398c1a5ee9803f1b19752c4c0f6419a7f973cf32fc404cbb3f5" Nov 28 17:03:02 crc kubenswrapper[5024]: I1128 17:03:02.101872 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7jhtl" Nov 28 17:03:02 crc kubenswrapper[5024]: I1128 17:03:02.102321 5024 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:03:02 crc kubenswrapper[5024]: I1128 17:03:02.102341 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c" Nov 28 17:03:02 crc kubenswrapper[5024]: I1128 17:03:02.441545 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:03:02 crc kubenswrapper[5024]: I1128 17:03:02.447382 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:03:03 crc kubenswrapper[5024]: I1128 17:03:03.121467 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:03:08 crc kubenswrapper[5024]: I1128 17:03:08.538130 5024 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1790f8b5-663c-4b8f-bbb5-d619da24252a" Nov 28 17:03:09 crc kubenswrapper[5024]: I1128 17:03:09.135969 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 17:03:09 crc kubenswrapper[5024]: I1128 17:03:09.603488 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 17:03:09 crc kubenswrapper[5024]: I1128 17:03:09.759447 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 17:03:09 crc kubenswrapper[5024]: I1128 17:03:09.883160 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 17:03:10 crc kubenswrapper[5024]: 
Nov 28 17:03:10 crc kubenswrapper[5024]: I1128 17:03:10.596181 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 28 17:03:10 crc kubenswrapper[5024]: I1128 17:03:10.737897 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 28 17:03:10 crc kubenswrapper[5024]: I1128 17:03:10.757277 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 28 17:03:11 crc kubenswrapper[5024]: I1128 17:03:11.576392 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 17:03:11 crc kubenswrapper[5024]: I1128 17:03:11.694791 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 28 17:03:11 crc kubenswrapper[5024]: I1128 17:03:11.803126 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 28 17:03:11 crc kubenswrapper[5024]: I1128 17:03:11.836777 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 28 17:03:11 crc kubenswrapper[5024]: I1128 17:03:11.976843 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 28 17:03:11 crc kubenswrapper[5024]: I1128 17:03:11.993731 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.016232 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.160519 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.164839 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.174949 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.193206 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.213240 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.292901 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.454543 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.516778 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.595803 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 
17:03:12.613185 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.769233 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 28 17:03:12 crc kubenswrapper[5024]: I1128 17:03:12.893547 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.108239 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.188992 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.355490 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.518400 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.523791 5024 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.614565 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.643221 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.914780 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.940131 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 28 17:03:13 crc kubenswrapper[5024]: I1128 17:03:13.943321 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.005453 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.102597 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.109275 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.209690 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.212867 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.215604 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.328607 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.331112 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.356644 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.424326 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.429284 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.539607 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.584768 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.624809 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.655920 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.710872 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 28 17:03:14 crc kubenswrapper[5024]: I1128 17:03:14.858973 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.013419 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.014255 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.053219 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.271709 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.350106 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.383947 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.411544 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.431444 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.447907 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.481192 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.505818 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.516305 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.525012 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.558137 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.584372 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.585192 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.616570 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.640872 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.737640 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.807609 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.850040 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.853351 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 28 17:03:15 crc kubenswrapper[5024]: I1128 17:03:15.991632 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.011522 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.069888 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.089273 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.223186 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.266400 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.352401 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.402876 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.437261 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.445659 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.512547 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.535312 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.540133 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.574173 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.749520 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.758186 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.896154 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 28 17:03:16 crc kubenswrapper[5024]: I1128 17:03:16.910701 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.035713 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.118398 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.148758 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.229902 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.300004 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.350803 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.466596 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.468417 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.468724 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.490438 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.583465 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.712082 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.818630 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.868912 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.932596 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 28 17:03:17 crc kubenswrapper[5024]: I1128 17:03:17.954734 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.013585 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.020215 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.028297 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.074764 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.144618 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.213718 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.271226 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.300309 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.320446 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.337538 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.377260 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.382158 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.403566 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.462053 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.508037 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.632505 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.632742 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.701226 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.786129 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.819176 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.829193 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.848047 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.905467 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 28 17:03:18 crc kubenswrapper[5024]: I1128 17:03:18.930740 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.003793 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.073042 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.125408 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.144331 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.172864 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.328561 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.352338 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.409759 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.411456 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.421715 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.520167 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.520639 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.532698 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.534306 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.543399 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.587343 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.598044 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.664803 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.755004 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.778980 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.836792 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.851774 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.879220 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.919498 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.928482 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.933924 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 28 17:03:19 crc kubenswrapper[5024]: I1128 17:03:19.995199 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.013771 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.096291 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.232988 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.285501 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.321184 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.415965 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.427983 5024 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.430868 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.430844974 podStartE2EDuration="42.430844974s" podCreationTimestamp="2025-11-28 17:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:02:58.756705871 +0000 UTC m=+280.805626786" watchObservedRunningTime="2025-11-28 17:03:20.430844974 +0000 UTC m=+302.479765879"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434191 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7jhtl","openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434333 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"]
Nov 28 17:03:20 crc kubenswrapper[5024]: E1128 17:03:20.434626 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerName="oauth-openshift"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434654 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerName="oauth-openshift"
Nov 28 17:03:20 crc kubenswrapper[5024]: E1128 17:03:20.434677 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" containerName="installer"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434687 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" containerName="installer"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434791 5024 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434833 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f9f0bfe7-ae68-4218-b0ca-735fa4098f1c"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434863 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" containerName="oauth-openshift"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.434882 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e663a3-b7d3-48f0-876c-8365348bc6ca" containerName="installer"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.435723 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.439871 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.440184 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.440210 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.440502 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.440740 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.441148 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.441218 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.441446 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.441523 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.441591 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.441901 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.442239 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.442273 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.449760 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.450645 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.457267 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.481630 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.481605367 podStartE2EDuration="21.481605367s" podCreationTimestamp="2025-11-28 17:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:20.481532515 +0000 UTC m=+302.530453420" watchObservedRunningTime="2025-11-28 17:03:20.481605367 +0000 UTC m=+302.530526282"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.509493 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231e7091-0809-44e9-9d1a-d5a1ea092a64" path="/var/lib/kubelet/pods/231e7091-0809-44e9-9d1a-d5a1ea092a64/volumes"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.532948 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d98336ea-cb72-4207-b7b1-31e7199b819f-audit-dir\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.532998 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-session\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533048 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533075 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 crc kubenswrapper[5024]: I1128 17:03:20.533161 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533221 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533252 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-service-ca\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533326 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5jq\" (UniqueName: \"kubernetes.io/projected/d98336ea-cb72-4207-b7b1-31e7199b819f-kube-api-access-ps5jq\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533358 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533405 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-error\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533451 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-router-certs\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533478 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-audit-policies\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533510 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.533545 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-login\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.598421 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635266 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5jq\" (UniqueName: \"kubernetes.io/projected/d98336ea-cb72-4207-b7b1-31e7199b819f-kube-api-access-ps5jq\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635354 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635403 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-error\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635442 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-router-certs\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635467 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-audit-policies\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635491 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635517 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-login\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635545 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d98336ea-cb72-4207-b7b1-31e7199b819f-audit-dir\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635572 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-session\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635597 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635621 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635643 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635668 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.635692 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-service-ca\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.636834 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d98336ea-cb72-4207-b7b1-31e7199b819f-audit-dir\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.637277 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-audit-policies\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.637277 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-service-ca\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.637902 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.638498 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.640461 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.642308 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-router-certs\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.643105 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-session\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.643505 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-error\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.643574 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.644005 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.644526 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.645320 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-login\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.650246 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d98336ea-cb72-4207-b7b1-31e7199b819f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.655807 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5jq\" (UniqueName: \"kubernetes.io/projected/d98336ea-cb72-4207-b7b1-31e7199b819f-kube-api-access-ps5jq\") pod \"oauth-openshift-5795c8b5fb-8lv2s\" (UID: \"d98336ea-cb72-4207-b7b1-31e7199b819f\") " pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.677873 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.706180 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.725450 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.749146 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.756834 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.849253 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.919397 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.937841 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 28 17:03:20 crc kubenswrapper[5024]: I1128 17:03:20.975904 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s"]
Nov 28 17:03:20 crc kubenswrapper[5024]: W1128 17:03:20.982489 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd98336ea_cb72_4207_b7b1_31e7199b819f.slice/crio-fbd55b97d9bdfd96a5b8067db903177eec6aa34883ae42ca68c3b83281b8d5ba WatchSource:0}: Error finding container fbd55b97d9bdfd96a5b8067db903177eec6aa34883ae42ca68c3b83281b8d5ba: Status 404 returned error can't find the container with id fbd55b97d9bdfd96a5b8067db903177eec6aa34883ae42ca68c3b83281b8d5ba
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.023246 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.089748 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.221853 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.230303 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" event={"ID":"d98336ea-cb72-4207-b7b1-31e7199b819f","Type":"ContainerStarted","Data":"fbd55b97d9bdfd96a5b8067db903177eec6aa34883ae42ca68c3b83281b8d5ba"}
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.282813 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.330812 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.332096 5024 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.332387 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff" gracePeriod=5
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.334334 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.403135 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.408521 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.493755 5024 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.598119 5024 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.603814 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.625364 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.663323 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 28 17:03:21 crc kubenswrapper[5024]: E1128 17:03:21.667297 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd98336ea_cb72_4207_b7b1_31e7199b819f.slice/crio-3ccb2375cb491c3d1b3f16972b216206330e181cf3a1fc20dd7c77db1a16f405.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.685675 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.697325 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.720739 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.728881 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.885729 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.910930 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.918857 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 28 17:03:21 crc kubenswrapper[5024]: I1128 17:03:21.976325 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.039260 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.176523 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.237426 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5795c8b5fb-8lv2s_d98336ea-cb72-4207-b7b1-31e7199b819f/oauth-openshift/0.log"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.237504 5024 generic.go:334] "Generic (PLEG): container finished" podID="d98336ea-cb72-4207-b7b1-31e7199b819f" containerID="3ccb2375cb491c3d1b3f16972b216206330e181cf3a1fc20dd7c77db1a16f405" exitCode=255
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.237552 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" event={"ID":"d98336ea-cb72-4207-b7b1-31e7199b819f","Type":"ContainerDied","Data":"3ccb2375cb491c3d1b3f16972b216206330e181cf3a1fc20dd7c77db1a16f405"}
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.238289 5024 scope.go:117] "RemoveContainer" containerID="3ccb2375cb491c3d1b3f16972b216206330e181cf3a1fc20dd7c77db1a16f405"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.321111 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.373869 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.395404 5024 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.549769 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.616797 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.816805 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.954703 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 28 17:03:22 crc kubenswrapper[5024]: I1128 17:03:22.989566 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.006714 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.173405 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.245003 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5795c8b5fb-8lv2s_d98336ea-cb72-4207-b7b1-31e7199b819f/oauth-openshift/1.log"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.245444 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5795c8b5fb-8lv2s_d98336ea-cb72-4207-b7b1-31e7199b819f/oauth-openshift/0.log"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.245483 5024 generic.go:334] "Generic (PLEG): container finished" podID="d98336ea-cb72-4207-b7b1-31e7199b819f" containerID="4ad8c4521a3a67347a858d9c71b9242d56206b0807c5d85948b83f9a9ce512a2" exitCode=255
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.245518 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" event={"ID":"d98336ea-cb72-4207-b7b1-31e7199b819f","Type":"ContainerDied","Data":"4ad8c4521a3a67347a858d9c71b9242d56206b0807c5d85948b83f9a9ce512a2"}
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.245562 5024 scope.go:117] "RemoveContainer" containerID="3ccb2375cb491c3d1b3f16972b216206330e181cf3a1fc20dd7c77db1a16f405"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.246173 5024 scope.go:117] "RemoveContainer" containerID="4ad8c4521a3a67347a858d9c71b9242d56206b0807c5d85948b83f9a9ce512a2"
Nov 28 17:03:23 crc kubenswrapper[5024]: E1128 17:03:23.246475 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5795c8b5fb-8lv2s_openshift-authentication(d98336ea-cb72-4207-b7b1-31e7199b819f)\"" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" podUID="d98336ea-cb72-4207-b7b1-31e7199b819f"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.276861 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.405086 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.536500 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.624505 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.764324 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.883381 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.894098 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.925695 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 17:03:23 crc kubenswrapper[5024]: I1128 17:03:23.974351 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.099328 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.105002 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.197756 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.253494 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5795c8b5fb-8lv2s_d98336ea-cb72-4207-b7b1-31e7199b819f/oauth-openshift/1.log"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.254757 5024 scope.go:117] "RemoveContainer" containerID="4ad8c4521a3a67347a858d9c71b9242d56206b0807c5d85948b83f9a9ce512a2"
Nov 28 17:03:24 crc kubenswrapper[5024]: E1128 17:03:24.255140 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5795c8b5fb-8lv2s_openshift-authentication(d98336ea-cb72-4207-b7b1-31e7199b819f)\"" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" podUID="d98336ea-cb72-4207-b7b1-31e7199b819f"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.315588 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.394938 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.526741 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 28 17:03:24 crc kubenswrapper[5024]: I1128 17:03:24.922255 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 17:03:25 crc kubenswrapper[5024]: I1128 17:03:25.050421 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 28 17:03:25 crc kubenswrapper[5024]: I1128 17:03:25.052451 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 28 17:03:25 crc kubenswrapper[5024]: I1128 17:03:25.059608 5024 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 28 17:03:25 crc kubenswrapper[5024]: I1128 17:03:25.345239 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 28 17:03:25 crc kubenswrapper[5024]: I1128 17:03:25.418122 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 17:03:25 crc kubenswrapper[5024]: I1128 17:03:25.779891 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 28 17:03:25 crc kubenswrapper[5024]: I1128 17:03:25.911520 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 17:03:26 crc kubenswrapper[5024]: I1128 17:03:26.113715 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 28 17:03:26 crc kubenswrapper[5024]: I1128 17:03:26.212978 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 17:03:26 crc kubenswrapper[5024]: I1128 17:03:26.915109 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Nov 28 17:03:26 crc kubenswrapper[5024]: I1128 17:03:26.915224 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 17:03:26 crc kubenswrapper[5024]: I1128 17:03:26.986409 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.020819 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.030210 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.092876 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093000 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093053 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093105 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093159 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093303 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093375 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093429 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093540 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093957 5024 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093979 5024 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.093989 5024 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.094003 5024 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.102092 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.196590 5024 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.275213 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.275290 5024 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff" exitCode=137 Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.275354 5024 scope.go:117] "RemoveContainer" containerID="1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.275466 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.295539 5024 scope.go:117] "RemoveContainer" containerID="1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff" Nov 28 17:03:27 crc kubenswrapper[5024]: E1128 17:03:27.296062 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff\": container with ID starting with 1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff not found: ID does not exist" containerID="1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff" Nov 28 17:03:27 crc kubenswrapper[5024]: I1128 17:03:27.296098 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff"} err="failed to get container status \"1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff\": rpc error: code = NotFound desc = could not find container \"1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff\": container with ID starting with 1c9833532f8693fe0eeaa9d9225056bdc41aeb73670f47ce60221084509438ff not found: ID does not exist" Nov 28 17:03:28 crc kubenswrapper[5024]: I1128 17:03:28.506319 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 28 17:03:28 crc kubenswrapper[5024]: I1128 17:03:28.506902 5024 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 28 17:03:28 crc kubenswrapper[5024]: I1128 17:03:28.519648 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 17:03:28 crc kubenswrapper[5024]: I1128 17:03:28.519686 5024 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="cd3f7ad1-bf3b-41b7-b2e5-1c51c40520ec" Nov 28 17:03:28 crc kubenswrapper[5024]: I1128 17:03:28.524916 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 17:03:28 crc kubenswrapper[5024]: I1128 
17:03:28.524949 5024 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="cd3f7ad1-bf3b-41b7-b2e5-1c51c40520ec" Nov 28 17:03:30 crc kubenswrapper[5024]: I1128 17:03:30.757353 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" Nov 28 17:03:30 crc kubenswrapper[5024]: I1128 17:03:30.757409 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" Nov 28 17:03:30 crc kubenswrapper[5024]: I1128 17:03:30.758111 5024 scope.go:117] "RemoveContainer" containerID="4ad8c4521a3a67347a858d9c71b9242d56206b0807c5d85948b83f9a9ce512a2" Nov 28 17:03:30 crc kubenswrapper[5024]: E1128 17:03:30.758305 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5795c8b5fb-8lv2s_openshift-authentication(d98336ea-cb72-4207-b7b1-31e7199b819f)\"" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" podUID="d98336ea-cb72-4207-b7b1-31e7199b819f" Nov 28 17:03:40 crc kubenswrapper[5024]: I1128 17:03:40.419582 5024 generic.go:334] "Generic (PLEG): container finished" podID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerID="213be41ff4da95b7cc71ec5360caf9eb6ff2895cf36d82f7601157b4f203b416" exitCode=0 Nov 28 17:03:40 crc kubenswrapper[5024]: I1128 17:03:40.419708 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" event={"ID":"80a843cd-6141-431e-83c1-a7ce0110e31f","Type":"ContainerDied","Data":"213be41ff4da95b7cc71ec5360caf9eb6ff2895cf36d82f7601157b4f203b416"} Nov 28 17:03:40 crc kubenswrapper[5024]: I1128 17:03:40.420658 5024 scope.go:117] "RemoveContainer" containerID="213be41ff4da95b7cc71ec5360caf9eb6ff2895cf36d82f7601157b4f203b416" Nov 28 17:03:41 crc kubenswrapper[5024]: I1128 17:03:41.427757 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" event={"ID":"80a843cd-6141-431e-83c1-a7ce0110e31f","Type":"ContainerStarted","Data":"476661b4d061905781fdc8d667a57a3ff2d047d92a598bf1c6af70a17d190790"} Nov 28 17:03:41 crc kubenswrapper[5024]: I1128 17:03:41.428661 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:03:41 crc kubenswrapper[5024]: I1128 17:03:41.433006 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:03:43 crc kubenswrapper[5024]: I1128 17:03:43.498536 5024 scope.go:117] "RemoveContainer" containerID="4ad8c4521a3a67347a858d9c71b9242d56206b0807c5d85948b83f9a9ce512a2" Nov 28 17:03:44 crc kubenswrapper[5024]: I1128 17:03:44.450702 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5795c8b5fb-8lv2s_d98336ea-cb72-4207-b7b1-31e7199b819f/oauth-openshift/1.log" Nov 28 17:03:44 crc kubenswrapper[5024]: I1128 17:03:44.450836 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" event={"ID":"d98336ea-cb72-4207-b7b1-31e7199b819f","Type":"ContainerStarted","Data":"47530ce4c15b2f477b45eed51fc2e4c1f946c7a5222800fa1afb2eaf60beaa45"} Nov 28 17:03:44 crc 
kubenswrapper[5024]: I1128 17:03:44.451399 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" Nov 28 17:03:44 crc kubenswrapper[5024]: I1128 17:03:44.458086 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" Nov 28 17:03:44 crc kubenswrapper[5024]: I1128 17:03:44.479522 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5795c8b5fb-8lv2s" podStartSLOduration=69.479496569 podStartE2EDuration="1m9.479496569s" podCreationTimestamp="2025-11-28 17:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:44.477599594 +0000 UTC m=+326.526520589" watchObservedRunningTime="2025-11-28 17:03:44.479496569 +0000 UTC m=+326.528417484" Nov 28 17:03:51 crc kubenswrapper[5024]: I1128 17:03:51.740405 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2dsw"] Nov 28 17:03:51 crc kubenswrapper[5024]: I1128 17:03:51.743435 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" podUID="d4cd69fe-add0-427e-a129-cfb9cecb6887" containerName="controller-manager" containerID="cri-o://c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b" gracePeriod=30 Nov 28 17:03:51 crc kubenswrapper[5024]: I1128 17:03:51.845520 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg"] Nov 28 17:03:51 crc kubenswrapper[5024]: I1128 17:03:51.845792 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" podUID="ac1db444-6f12-4ac1-943f-b56efdbbb206" containerName="route-controller-manager" containerID="cri-o://f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178" gracePeriod=30 Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.137482 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.158777 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-client-ca\") pod \"d4cd69fe-add0-427e-a129-cfb9cecb6887\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.158848 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-config\") pod \"d4cd69fe-add0-427e-a129-cfb9cecb6887\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.158910 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4cd69fe-add0-427e-a129-cfb9cecb6887-serving-cert\") pod \"d4cd69fe-add0-427e-a129-cfb9cecb6887\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.158970 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-proxy-ca-bundles\") pod \"d4cd69fe-add0-427e-a129-cfb9cecb6887\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.159061 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdjd4\" (UniqueName: \"kubernetes.io/projected/d4cd69fe-add0-427e-a129-cfb9cecb6887-kube-api-access-fdjd4\") pod \"d4cd69fe-add0-427e-a129-cfb9cecb6887\" (UID: \"d4cd69fe-add0-427e-a129-cfb9cecb6887\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.159980 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4cd69fe-add0-427e-a129-cfb9cecb6887" (UID: "d4cd69fe-add0-427e-a129-cfb9cecb6887"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.160345 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-config" (OuterVolumeSpecName: "config") pod "d4cd69fe-add0-427e-a129-cfb9cecb6887" (UID: "d4cd69fe-add0-427e-a129-cfb9cecb6887"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.162533 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d4cd69fe-add0-427e-a129-cfb9cecb6887" (UID: "d4cd69fe-add0-427e-a129-cfb9cecb6887"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.168836 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cd69fe-add0-427e-a129-cfb9cecb6887-kube-api-access-fdjd4" (OuterVolumeSpecName: "kube-api-access-fdjd4") pod "d4cd69fe-add0-427e-a129-cfb9cecb6887" (UID: "d4cd69fe-add0-427e-a129-cfb9cecb6887"). InnerVolumeSpecName "kube-api-access-fdjd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.172008 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cd69fe-add0-427e-a129-cfb9cecb6887-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4cd69fe-add0-427e-a129-cfb9cecb6887" (UID: "d4cd69fe-add0-427e-a129-cfb9cecb6887"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.191590 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260257 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-config\") pod \"ac1db444-6f12-4ac1-943f-b56efdbbb206\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260383 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1db444-6f12-4ac1-943f-b56efdbbb206-serving-cert\") pod \"ac1db444-6f12-4ac1-943f-b56efdbbb206\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260451 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zzx5\" (UniqueName: \"kubernetes.io/projected/ac1db444-6f12-4ac1-943f-b56efdbbb206-kube-api-access-7zzx5\") pod \"ac1db444-6f12-4ac1-943f-b56efdbbb206\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260474 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-client-ca\") pod \"ac1db444-6f12-4ac1-943f-b56efdbbb206\" (UID: \"ac1db444-6f12-4ac1-943f-b56efdbbb206\") " Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260795 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdjd4\" (UniqueName: \"kubernetes.io/projected/d4cd69fe-add0-427e-a129-cfb9cecb6887-kube-api-access-fdjd4\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260814 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260824 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260837 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4cd69fe-add0-427e-a129-cfb9cecb6887-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.260846 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4cd69fe-add0-427e-a129-cfb9cecb6887-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.261710 5024 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-config" (OuterVolumeSpecName: "config") pod "ac1db444-6f12-4ac1-943f-b56efdbbb206" (UID: "ac1db444-6f12-4ac1-943f-b56efdbbb206"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.261969 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac1db444-6f12-4ac1-943f-b56efdbbb206" (UID: "ac1db444-6f12-4ac1-943f-b56efdbbb206"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.266296 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1db444-6f12-4ac1-943f-b56efdbbb206-kube-api-access-7zzx5" (OuterVolumeSpecName: "kube-api-access-7zzx5") pod "ac1db444-6f12-4ac1-943f-b56efdbbb206" (UID: "ac1db444-6f12-4ac1-943f-b56efdbbb206"). InnerVolumeSpecName "kube-api-access-7zzx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.266443 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1db444-6f12-4ac1-943f-b56efdbbb206-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac1db444-6f12-4ac1-943f-b56efdbbb206" (UID: "ac1db444-6f12-4ac1-943f-b56efdbbb206"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.362105 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.362693 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1db444-6f12-4ac1-943f-b56efdbbb206-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.362715 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zzx5\" (UniqueName: \"kubernetes.io/projected/ac1db444-6f12-4ac1-943f-b56efdbbb206-kube-api-access-7zzx5\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.362731 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac1db444-6f12-4ac1-943f-b56efdbbb206-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.516372 5024 generic.go:334] "Generic (PLEG): container finished" podID="d4cd69fe-add0-427e-a129-cfb9cecb6887" containerID="c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b" exitCode=0 Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.516476 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" event={"ID":"d4cd69fe-add0-427e-a129-cfb9cecb6887","Type":"ContainerDied","Data":"c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b"} Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.516506 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.516548 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2dsw" event={"ID":"d4cd69fe-add0-427e-a129-cfb9cecb6887","Type":"ContainerDied","Data":"a91fb40398c7bb7a1428b49790f94bd0384112309052091ab6c5b908aa35e54b"} Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.516573 5024 scope.go:117] "RemoveContainer" containerID="c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.521108 5024 generic.go:334] "Generic (PLEG): container finished" podID="ac1db444-6f12-4ac1-943f-b56efdbbb206" containerID="f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178" exitCode=0 Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.521171 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" event={"ID":"ac1db444-6f12-4ac1-943f-b56efdbbb206","Type":"ContainerDied","Data":"f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178"} Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.521406 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" event={"ID":"ac1db444-6f12-4ac1-943f-b56efdbbb206","Type":"ContainerDied","Data":"14eb5f711c3596f9f888f0e9f57a69403a3fa16e39f05c8f63859b603b5f3efd"} Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.521236 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.541179 5024 scope.go:117] "RemoveContainer" containerID="c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b" Nov 28 17:03:52 crc kubenswrapper[5024]: E1128 17:03:52.541679 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b\": container with ID starting with c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b not found: ID does not exist" containerID="c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.541719 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b"} err="failed to get container status \"c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b\": rpc error: code = NotFound desc = could not find container \"c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b\": container with ID starting with c5cb7145df6d24810264d348e22eeb89b104a2f7a990c2a2a575aee331d9842b not found: ID does not exist" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.541747 5024 scope.go:117] "RemoveContainer" containerID="f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.563499 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg"] Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.565294 5024 scope.go:117] "RemoveContainer" 
containerID="f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178" Nov 28 17:03:52 crc kubenswrapper[5024]: E1128 17:03:52.566585 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178\": container with ID starting with f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178 not found: ID does not exist" containerID="f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.566619 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178"} err="failed to get container status \"f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178\": rpc error: code = NotFound desc = could not find container \"f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178\": container with ID starting with f1f323a4020ecb1b2b71d18eacaf442684a86455fc5f0c3f8fa29bc8226ea178 not found: ID does not exist" Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.569467 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qnhjg"] Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.582564 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2dsw"] Nov 28 17:03:52 crc kubenswrapper[5024]: I1128 17:03:52.589854 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2dsw"] Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.030676 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f"] Nov 28 17:03:53 crc kubenswrapper[5024]: E1128 17:03:53.031228 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cd69fe-add0-427e-a129-cfb9cecb6887" containerName="controller-manager" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.031256 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cd69fe-add0-427e-a129-cfb9cecb6887" containerName="controller-manager" Nov 28 17:03:53 crc kubenswrapper[5024]: E1128 17:03:53.031280 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.031292 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 17:03:53 crc kubenswrapper[5024]: E1128 17:03:53.031312 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1db444-6f12-4ac1-943f-b56efdbbb206" containerName="route-controller-manager" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.031325 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1db444-6f12-4ac1-943f-b56efdbbb206" containerName="route-controller-manager" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.031540 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1db444-6f12-4ac1-943f-b56efdbbb206" containerName="route-controller-manager" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.031566 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cd69fe-add0-427e-a129-cfb9cecb6887" containerName="controller-manager" Nov 28 17:03:53 crc 
kubenswrapper[5024]: I1128 17:03:53.031584 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.032355 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.033003 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22"] Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.033953 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.035743 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.035852 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.036048 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.036289 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.036531 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.036806 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.038882 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.039151 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.039296 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.039286 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.039576 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.040053 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.047862 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f"] Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.059765 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22"] Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.062487 5024 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.071935 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-client-ca\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.071992 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-client-ca\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.072138 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-proxy-ca-bundles\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.072177 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jd5b\" (UniqueName: \"kubernetes.io/projected/24ae04d5-4230-4a0a-b787-11296e73d0f6-kube-api-access-7jd5b\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.072242 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-config\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.072266 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-serving-cert\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.072324 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-config\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.072347 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7sj\" (UniqueName: \"kubernetes.io/projected/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-kube-api-access-qb7sj\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: 
\"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.072376 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ae04d5-4230-4a0a-b787-11296e73d0f6-serving-cert\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173682 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-config\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173737 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7sj\" (UniqueName: \"kubernetes.io/projected/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-kube-api-access-qb7sj\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173790 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ae04d5-4230-4a0a-b787-11296e73d0f6-serving-cert\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173837 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-client-ca\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173871 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-client-ca\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173898 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-proxy-ca-bundles\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173936 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jd5b\" (UniqueName: \"kubernetes.io/projected/24ae04d5-4230-4a0a-b787-11296e73d0f6-kube-api-access-7jd5b\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " 
pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.173984 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-config\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.174009 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-serving-cert\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.175379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-proxy-ca-bundles\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.175379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-client-ca\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.176133 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-client-ca\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.176644 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-config\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.179036 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-config\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.179815 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ae04d5-4230-4a0a-b787-11296e73d0f6-serving-cert\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.189212 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-serving-cert\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.195123 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb7sj\" (UniqueName: \"kubernetes.io/projected/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-kube-api-access-qb7sj\") pod \"route-controller-manager-64fb8948d5-58d22\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.195252 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jd5b\" (UniqueName: \"kubernetes.io/projected/24ae04d5-4230-4a0a-b787-11296e73d0f6-kube-api-access-7jd5b\") pod \"controller-manager-5699d7d6f5-xjb9f\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.370729 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.385050 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.622740 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22"] Nov 28 17:03:53 crc kubenswrapper[5024]: W1128 17:03:53.627720 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode16ea0fe_24aa_4ced_82fa_31027ef9f7b1.slice/crio-657bde87e6bf548095ed8c318eff4a2e41f3172d97d0979acfa728fca98a2b28 WatchSource:0}: Error finding container 657bde87e6bf548095ed8c318eff4a2e41f3172d97d0979acfa728fca98a2b28: Status 404 returned error can't find the container with id 657bde87e6bf548095ed8c318eff4a2e41f3172d97d0979acfa728fca98a2b28 Nov 28 17:03:53 crc kubenswrapper[5024]: I1128 17:03:53.889954 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f"] Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.506072 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1db444-6f12-4ac1-943f-b56efdbbb206" path="/var/lib/kubelet/pods/ac1db444-6f12-4ac1-943f-b56efdbbb206/volumes" Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.507114 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cd69fe-add0-427e-a129-cfb9cecb6887" path="/var/lib/kubelet/pods/d4cd69fe-add0-427e-a129-cfb9cecb6887/volumes" Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.541248 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" event={"ID":"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1","Type":"ContainerStarted","Data":"8b2349e14c629edc297e83531e58ab8a743197d6b23ee4f3c611749a415000d1"} Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.541346 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" 
event={"ID":"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1","Type":"ContainerStarted","Data":"657bde87e6bf548095ed8c318eff4a2e41f3172d97d0979acfa728fca98a2b28"} Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.541818 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.543895 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" event={"ID":"24ae04d5-4230-4a0a-b787-11296e73d0f6","Type":"ContainerStarted","Data":"a7f2bf718a414d51914da28f36957b9495bfdd5d4356f898a6e10ef873079b19"} Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.543957 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" event={"ID":"24ae04d5-4230-4a0a-b787-11296e73d0f6","Type":"ContainerStarted","Data":"9fea3e2fc0b5ddaa3cec1808b92b4a0655f1d352336bf75d9e6d9c56303134b7"} Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.544237 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.549873 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.550259 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.565035 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" podStartSLOduration=3.564998003 podStartE2EDuration="3.564998003s" podCreationTimestamp="2025-11-28 17:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:54.564435063 +0000 UTC m=+336.613355988" watchObservedRunningTime="2025-11-28 17:03:54.564998003 +0000 UTC m=+336.613918908" Nov 28 17:03:54 crc kubenswrapper[5024]: I1128 17:03:54.586767 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" podStartSLOduration=3.586740175 podStartE2EDuration="3.586740175s" podCreationTimestamp="2025-11-28 17:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:54.584445864 +0000 UTC m=+336.633366769" watchObservedRunningTime="2025-11-28 17:03:54.586740175 +0000 UTC m=+336.635661080" Nov 28 17:04:21 crc kubenswrapper[5024]: I1128 17:04:21.889581 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f"] Nov 28 17:04:21 crc kubenswrapper[5024]: I1128 17:04:21.890483 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" podUID="24ae04d5-4230-4a0a-b787-11296e73d0f6" containerName="controller-manager" containerID="cri-o://a7f2bf718a414d51914da28f36957b9495bfdd5d4356f898a6e10ef873079b19" gracePeriod=30 Nov 28 17:04:21 crc kubenswrapper[5024]: I1128 17:04:21.976420 5024 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22"] Nov 28 17:04:21 crc kubenswrapper[5024]: I1128 17:04:21.976729 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" podUID="e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" containerName="route-controller-manager" containerID="cri-o://8b2349e14c629edc297e83531e58ab8a743197d6b23ee4f3c611749a415000d1" gracePeriod=30 Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.371739 5024 patch_prober.go:28] interesting pod/controller-manager-5699d7d6f5-xjb9f container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.372412 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" podUID="24ae04d5-4230-4a0a-b787-11296e73d0f6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.386603 5024 patch_prober.go:28] interesting pod/route-controller-manager-64fb8948d5-58d22 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.386688 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" podUID="e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.743573 5024 generic.go:334] "Generic (PLEG): container finished" podID="e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" containerID="8b2349e14c629edc297e83531e58ab8a743197d6b23ee4f3c611749a415000d1" exitCode=0 Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.743671 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" event={"ID":"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1","Type":"ContainerDied","Data":"8b2349e14c629edc297e83531e58ab8a743197d6b23ee4f3c611749a415000d1"} Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.747323 5024 generic.go:334] "Generic (PLEG): container finished" podID="24ae04d5-4230-4a0a-b787-11296e73d0f6" containerID="a7f2bf718a414d51914da28f36957b9495bfdd5d4356f898a6e10ef873079b19" exitCode=0 Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.747386 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" event={"ID":"24ae04d5-4230-4a0a-b787-11296e73d0f6","Type":"ContainerDied","Data":"a7f2bf718a414d51914da28f36957b9495bfdd5d4356f898a6e10ef873079b19"} Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.811538 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.847103 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw"] Nov 28 17:04:23 crc kubenswrapper[5024]: E1128 17:04:23.847360 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" containerName="route-controller-manager" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.847377 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" containerName="route-controller-manager" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.847500 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" containerName="route-controller-manager" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.849301 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.907402 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw"] Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.925700 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb7sj\" (UniqueName: \"kubernetes.io/projected/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-kube-api-access-qb7sj\") pod \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.925786 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-serving-cert\") pod \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.925855 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-config\") pod \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.925948 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-client-ca\") pod \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\" (UID: \"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1\") " Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.930885 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-config" (OuterVolumeSpecName: "config") pod "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" (UID: "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.934279 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" (UID: "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.937185 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-kube-api-access-qb7sj" (OuterVolumeSpecName: "kube-api-access-qb7sj") pod "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" (UID: "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1"). InnerVolumeSpecName "kube-api-access-qb7sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.937998 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" (UID: "e16ea0fe-24aa-4ced-82fa-31027ef9f7b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:04:23 crc kubenswrapper[5024]: I1128 17:04:23.963003 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.028384 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f064e36-9767-4893-96e2-8a832547d8b6-config\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.028441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f064e36-9767-4893-96e2-8a832547d8b6-client-ca\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.028484 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f064e36-9767-4893-96e2-8a832547d8b6-serving-cert\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.028502 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbd7q\" (UniqueName: \"kubernetes.io/projected/5f064e36-9767-4893-96e2-8a832547d8b6-kube-api-access-lbd7q\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.028581 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb7sj\" (UniqueName: \"kubernetes.io/projected/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-kube-api-access-qb7sj\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.028595 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc 
kubenswrapper[5024]: I1128 17:04:24.028605 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.028616 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.130333 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-client-ca\") pod \"24ae04d5-4230-4a0a-b787-11296e73d0f6\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.130877 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jd5b\" (UniqueName: \"kubernetes.io/projected/24ae04d5-4230-4a0a-b787-11296e73d0f6-kube-api-access-7jd5b\") pod \"24ae04d5-4230-4a0a-b787-11296e73d0f6\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.130958 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-config\") pod \"24ae04d5-4230-4a0a-b787-11296e73d0f6\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.130994 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ae04d5-4230-4a0a-b787-11296e73d0f6-serving-cert\") pod \"24ae04d5-4230-4a0a-b787-11296e73d0f6\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131082 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-proxy-ca-bundles\") pod \"24ae04d5-4230-4a0a-b787-11296e73d0f6\" (UID: \"24ae04d5-4230-4a0a-b787-11296e73d0f6\") " Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131418 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f064e36-9767-4893-96e2-8a832547d8b6-serving-cert\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbd7q\" (UniqueName: \"kubernetes.io/projected/5f064e36-9767-4893-96e2-8a832547d8b6-kube-api-access-lbd7q\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131621 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f064e36-9767-4893-96e2-8a832547d8b6-config\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 
28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131641 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "24ae04d5-4230-4a0a-b787-11296e73d0f6" (UID: "24ae04d5-4230-4a0a-b787-11296e73d0f6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131676 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f064e36-9767-4893-96e2-8a832547d8b6-client-ca\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131751 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.131813 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "24ae04d5-4230-4a0a-b787-11296e73d0f6" (UID: "24ae04d5-4230-4a0a-b787-11296e73d0f6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.132319 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-config" (OuterVolumeSpecName: "config") pod "24ae04d5-4230-4a0a-b787-11296e73d0f6" (UID: "24ae04d5-4230-4a0a-b787-11296e73d0f6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.133201 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f064e36-9767-4893-96e2-8a832547d8b6-config\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.133594 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f064e36-9767-4893-96e2-8a832547d8b6-client-ca\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.135232 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f064e36-9767-4893-96e2-8a832547d8b6-serving-cert\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.136013 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ae04d5-4230-4a0a-b787-11296e73d0f6-kube-api-access-7jd5b" (OuterVolumeSpecName: "kube-api-access-7jd5b") pod "24ae04d5-4230-4a0a-b787-11296e73d0f6" (UID: "24ae04d5-4230-4a0a-b787-11296e73d0f6"). InnerVolumeSpecName "kube-api-access-7jd5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.136253 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ae04d5-4230-4a0a-b787-11296e73d0f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "24ae04d5-4230-4a0a-b787-11296e73d0f6" (UID: "24ae04d5-4230-4a0a-b787-11296e73d0f6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.150343 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbd7q\" (UniqueName: \"kubernetes.io/projected/5f064e36-9767-4893-96e2-8a832547d8b6-kube-api-access-lbd7q\") pod \"route-controller-manager-df78c4dbc-ckdbw\" (UID: \"5f064e36-9767-4893-96e2-8a832547d8b6\") " pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.184717 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.234172 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jd5b\" (UniqueName: \"kubernetes.io/projected/24ae04d5-4230-4a0a-b787-11296e73d0f6-kube-api-access-7jd5b\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.234213 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.234228 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24ae04d5-4230-4a0a-b787-11296e73d0f6-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.234241 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24ae04d5-4230-4a0a-b787-11296e73d0f6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.405749 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw"] Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.756790 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" event={"ID":"e16ea0fe-24aa-4ced-82fa-31027ef9f7b1","Type":"ContainerDied","Data":"657bde87e6bf548095ed8c318eff4a2e41f3172d97d0979acfa728fca98a2b28"} Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.756917 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.757297 5024 scope.go:117] "RemoveContainer" containerID="8b2349e14c629edc297e83531e58ab8a743197d6b23ee4f3c611749a415000d1" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.759000 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" event={"ID":"24ae04d5-4230-4a0a-b787-11296e73d0f6","Type":"ContainerDied","Data":"9fea3e2fc0b5ddaa3cec1808b92b4a0655f1d352336bf75d9e6d9c56303134b7"} Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.759060 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.760674 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" event={"ID":"5f064e36-9767-4893-96e2-8a832547d8b6","Type":"ContainerStarted","Data":"e2bceb250f718efd2bf1f06dc30b61a9264028798330a94babf0af07a9712977"} Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.814223 5024 scope.go:117] "RemoveContainer" containerID="a7f2bf718a414d51914da28f36957b9495bfdd5d4356f898a6e10ef873079b19" Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.818273 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22"] Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.827487 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fb8948d5-58d22"] Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.833532 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f"] Nov 28 17:04:24 crc kubenswrapper[5024]: I1128 17:04:24.839675 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5699d7d6f5-xjb9f"] Nov 28 17:04:25 crc kubenswrapper[5024]: I1128 17:04:25.771964 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" event={"ID":"5f064e36-9767-4893-96e2-8a832547d8b6","Type":"ContainerStarted","Data":"fc17b09cc268d3679c27a1cde9484eef3af3edb3431c55b2336c4aa8113d778d"} Nov 28 17:04:25 crc kubenswrapper[5024]: I1128 17:04:25.772352 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:25 crc kubenswrapper[5024]: I1128 17:04:25.781561 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" Nov 28 17:04:25 crc kubenswrapper[5024]: I1128 17:04:25.795695 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-df78c4dbc-ckdbw" podStartSLOduration=3.7956753819999998 podStartE2EDuration="3.795675382s" podCreationTimestamp="2025-11-28 17:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:04:25.792683546 +0000 UTC m=+367.841604441" watchObservedRunningTime="2025-11-28 17:04:25.795675382 +0000 UTC m=+367.844596287" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.050008 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7ff9c64758-jx2vh"] Nov 28 17:04:26 crc kubenswrapper[5024]: E1128 17:04:26.050435 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ae04d5-4230-4a0a-b787-11296e73d0f6" containerName="controller-manager" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.050462 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae04d5-4230-4a0a-b787-11296e73d0f6" containerName="controller-manager" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.050603 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="24ae04d5-4230-4a0a-b787-11296e73d0f6" containerName="controller-manager" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.051134 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.055206 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.055610 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.055637 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.055691 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.055790 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.055811 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.062098 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.066160 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff9c64758-jx2vh"] Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.160660 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-proxy-ca-bundles\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.160753 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-config\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.160876 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-client-ca\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.160954 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb80018-6d46-4b85-bb09-719f7f8848e5-serving-cert\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: 
I1128 17:04:26.160987 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk4v6\" (UniqueName: \"kubernetes.io/projected/1eb80018-6d46-4b85-bb09-719f7f8848e5-kube-api-access-qk4v6\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.261957 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb80018-6d46-4b85-bb09-719f7f8848e5-serving-cert\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.262009 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk4v6\" (UniqueName: \"kubernetes.io/projected/1eb80018-6d46-4b85-bb09-719f7f8848e5-kube-api-access-qk4v6\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.262066 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-proxy-ca-bundles\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.262107 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-config\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.262134 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-client-ca\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.263257 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-client-ca\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.263813 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-config\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.263920 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-proxy-ca-bundles\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.269398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb80018-6d46-4b85-bb09-719f7f8848e5-serving-cert\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.285772 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk4v6\" (UniqueName: \"kubernetes.io/projected/1eb80018-6d46-4b85-bb09-719f7f8848e5-kube-api-access-qk4v6\") pod \"controller-manager-7ff9c64758-jx2vh\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.379168 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.506859 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ae04d5-4230-4a0a-b787-11296e73d0f6" path="/var/lib/kubelet/pods/24ae04d5-4230-4a0a-b787-11296e73d0f6/volumes" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.507862 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16ea0fe-24aa-4ced-82fa-31027ef9f7b1" path="/var/lib/kubelet/pods/e16ea0fe-24aa-4ced-82fa-31027ef9f7b1/volumes" Nov 28 17:04:26 crc kubenswrapper[5024]: I1128 17:04:26.847103 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff9c64758-jx2vh"] Nov 28 17:04:26 crc kubenswrapper[5024]: W1128 17:04:26.853636 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eb80018_6d46_4b85_bb09_719f7f8848e5.slice/crio-35e539edffd4befaf58ddbd82f10b42e2fa27c08291cd4ee6812953680aa1520 WatchSource:0}: Error finding container 35e539edffd4befaf58ddbd82f10b42e2fa27c08291cd4ee6812953680aa1520: Status 404 returned error can't find the container with id 35e539edffd4befaf58ddbd82f10b42e2fa27c08291cd4ee6812953680aa1520 Nov 28 17:04:27 crc kubenswrapper[5024]: I1128 17:04:27.788665 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" event={"ID":"1eb80018-6d46-4b85-bb09-719f7f8848e5","Type":"ContainerStarted","Data":"c5edb50b4eeb27f422303229c0f6fbd014be12154f17df46cbfff842ef1407bb"} Nov 28 17:04:27 crc kubenswrapper[5024]: I1128 17:04:27.789139 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:27 crc kubenswrapper[5024]: I1128 17:04:27.789163 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" event={"ID":"1eb80018-6d46-4b85-bb09-719f7f8848e5","Type":"ContainerStarted","Data":"35e539edffd4befaf58ddbd82f10b42e2fa27c08291cd4ee6812953680aa1520"} Nov 28 17:04:27 crc kubenswrapper[5024]: I1128 17:04:27.796507 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:27 crc kubenswrapper[5024]: I1128 17:04:27.815082 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" podStartSLOduration=5.815061482 podStartE2EDuration="5.815061482s" podCreationTimestamp="2025-11-28 17:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:04:27.814622097 +0000 UTC m=+369.863543012" watchObservedRunningTime="2025-11-28 17:04:27.815061482 +0000 UTC m=+369.863982387" Nov 28 17:04:37 crc kubenswrapper[5024]: I1128 17:04:37.564819 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:04:37 crc kubenswrapper[5024]: I1128 17:04:37.565861 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:04:39 crc kubenswrapper[5024]: I1128 17:04:39.921693 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ff9c64758-jx2vh"] Nov 28 17:04:39 crc kubenswrapper[5024]: I1128 17:04:39.922113 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" podUID="1eb80018-6d46-4b85-bb09-719f7f8848e5" containerName="controller-manager" containerID="cri-o://c5edb50b4eeb27f422303229c0f6fbd014be12154f17df46cbfff842ef1407bb" gracePeriod=30 Nov 28 17:04:40 crc kubenswrapper[5024]: I1128 17:04:40.181629 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j64mb"] Nov 28 17:04:40 crc kubenswrapper[5024]: I1128 17:04:40.182413 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j64mb" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="registry-server" containerID="cri-o://576af8cb21cd7732d93f767a0937b3987e9d629196b3a5dca1628a39588d29a5" gracePeriod=2 Nov 28 17:04:40 crc kubenswrapper[5024]: I1128 17:04:40.918056 5024 generic.go:334] "Generic (PLEG): container finished" podID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerID="576af8cb21cd7732d93f767a0937b3987e9d629196b3a5dca1628a39588d29a5" exitCode=0 Nov 28 17:04:40 crc kubenswrapper[5024]: I1128 17:04:40.918144 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j64mb" event={"ID":"f10908eb-32ed-4e49-b1ea-7b627343b29d","Type":"ContainerDied","Data":"576af8cb21cd7732d93f767a0937b3987e9d629196b3a5dca1628a39588d29a5"} Nov 28 17:04:40 crc kubenswrapper[5024]: I1128 17:04:40.920140 5024 generic.go:334] "Generic (PLEG): container finished" podID="1eb80018-6d46-4b85-bb09-719f7f8848e5" containerID="c5edb50b4eeb27f422303229c0f6fbd014be12154f17df46cbfff842ef1407bb" exitCode=0 Nov 28 17:04:40 crc kubenswrapper[5024]: I1128 17:04:40.920167 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" event={"ID":"1eb80018-6d46-4b85-bb09-719f7f8848e5","Type":"ContainerDied","Data":"c5edb50b4eeb27f422303229c0f6fbd014be12154f17df46cbfff842ef1407bb"} Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.075752 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.142818 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54847d48c6-bfdb4"] Nov 28 17:04:41 crc kubenswrapper[5024]: E1128 17:04:41.143201 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb80018-6d46-4b85-bb09-719f7f8848e5" containerName="controller-manager" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.143219 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb80018-6d46-4b85-bb09-719f7f8848e5" containerName="controller-manager" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.143361 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb80018-6d46-4b85-bb09-719f7f8848e5" containerName="controller-manager" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.143901 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.147893 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54847d48c6-bfdb4"] Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.164191 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk4v6\" (UniqueName: \"kubernetes.io/projected/1eb80018-6d46-4b85-bb09-719f7f8848e5-kube-api-access-qk4v6\") pod \"1eb80018-6d46-4b85-bb09-719f7f8848e5\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.164278 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-config\") pod \"1eb80018-6d46-4b85-bb09-719f7f8848e5\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.164336 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-proxy-ca-bundles\") pod \"1eb80018-6d46-4b85-bb09-719f7f8848e5\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.164364 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb80018-6d46-4b85-bb09-719f7f8848e5-serving-cert\") pod \"1eb80018-6d46-4b85-bb09-719f7f8848e5\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.164458 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-client-ca\") pod \"1eb80018-6d46-4b85-bb09-719f7f8848e5\" (UID: \"1eb80018-6d46-4b85-bb09-719f7f8848e5\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.168182 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1eb80018-6d46-4b85-bb09-719f7f8848e5" (UID: "1eb80018-6d46-4b85-bb09-719f7f8848e5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.168259 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "1eb80018-6d46-4b85-bb09-719f7f8848e5" (UID: "1eb80018-6d46-4b85-bb09-719f7f8848e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.168747 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-config" (OuterVolumeSpecName: "config") pod "1eb80018-6d46-4b85-bb09-719f7f8848e5" (UID: "1eb80018-6d46-4b85-bb09-719f7f8848e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.175635 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb80018-6d46-4b85-bb09-719f7f8848e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1eb80018-6d46-4b85-bb09-719f7f8848e5" (UID: "1eb80018-6d46-4b85-bb09-719f7f8848e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.176890 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb80018-6d46-4b85-bb09-719f7f8848e5-kube-api-access-qk4v6" (OuterVolumeSpecName: "kube-api-access-qk4v6") pod "1eb80018-6d46-4b85-bb09-719f7f8848e5" (UID: "1eb80018-6d46-4b85-bb09-719f7f8848e5"). InnerVolumeSpecName "kube-api-access-qk4v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.230031 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.265844 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqhx8\" (UniqueName: \"kubernetes.io/projected/f10908eb-32ed-4e49-b1ea-7b627343b29d-kube-api-access-cqhx8\") pod \"f10908eb-32ed-4e49-b1ea-7b627343b29d\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.265939 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-catalog-content\") pod \"f10908eb-32ed-4e49-b1ea-7b627343b29d\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.266069 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-utilities\") pod \"f10908eb-32ed-4e49-b1ea-7b627343b29d\" (UID: \"f10908eb-32ed-4e49-b1ea-7b627343b29d\") " Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.266529 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-config\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.266617 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-proxy-ca-bundles\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.266824 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e58b0d-aafa-4b3c-ba90-be0db225a246-serving-cert\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.266936 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-client-ca\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267188 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2x9\" (UniqueName: \"kubernetes.io/projected/d4e58b0d-aafa-4b3c-ba90-be0db225a246-kube-api-access-vn2x9\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267403 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-utilities" 
(OuterVolumeSpecName: "utilities") pod "f10908eb-32ed-4e49-b1ea-7b627343b29d" (UID: "f10908eb-32ed-4e49-b1ea-7b627343b29d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267587 5024 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267615 5024 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb80018-6d46-4b85-bb09-719f7f8848e5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267628 5024 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267641 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk4v6\" (UniqueName: \"kubernetes.io/projected/1eb80018-6d46-4b85-bb09-719f7f8848e5-kube-api-access-qk4v6\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267655 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.267667 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb80018-6d46-4b85-bb09-719f7f8848e5-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.271014 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10908eb-32ed-4e49-b1ea-7b627343b29d-kube-api-access-cqhx8" (OuterVolumeSpecName: "kube-api-access-cqhx8") pod "f10908eb-32ed-4e49-b1ea-7b627343b29d" (UID: "f10908eb-32ed-4e49-b1ea-7b627343b29d"). InnerVolumeSpecName "kube-api-access-cqhx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.321423 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f10908eb-32ed-4e49-b1ea-7b627343b29d" (UID: "f10908eb-32ed-4e49-b1ea-7b627343b29d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.369503 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-config\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.369557 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-proxy-ca-bundles\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.369586 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e58b0d-aafa-4b3c-ba90-be0db225a246-serving-cert\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.369614 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-client-ca\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.369649 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn2x9\" (UniqueName: \"kubernetes.io/projected/d4e58b0d-aafa-4b3c-ba90-be0db225a246-kube-api-access-vn2x9\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.369701 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqhx8\" (UniqueName: \"kubernetes.io/projected/f10908eb-32ed-4e49-b1ea-7b627343b29d-kube-api-access-cqhx8\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.369756 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f10908eb-32ed-4e49-b1ea-7b627343b29d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.371172 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-client-ca\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.371737 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-config\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 
17:04:41.372442 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e58b0d-aafa-4b3c-ba90-be0db225a246-proxy-ca-bundles\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.375634 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e58b0d-aafa-4b3c-ba90-be0db225a246-serving-cert\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.393127 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn2x9\" (UniqueName: \"kubernetes.io/projected/d4e58b0d-aafa-4b3c-ba90-be0db225a246-kube-api-access-vn2x9\") pod \"controller-manager-54847d48c6-bfdb4\" (UID: \"d4e58b0d-aafa-4b3c-ba90-be0db225a246\") " pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.527635 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.930620 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j64mb" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.930635 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j64mb" event={"ID":"f10908eb-32ed-4e49-b1ea-7b627343b29d","Type":"ContainerDied","Data":"dfa58333ccd22c2e8ea74de83ea0bc11b91667480a61d84ff739558a0ba0bb0b"} Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.930725 5024 scope.go:117] "RemoveContainer" containerID="576af8cb21cd7732d93f767a0937b3987e9d629196b3a5dca1628a39588d29a5" Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.932641 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff9c64758-jx2vh" event={"ID":"1eb80018-6d46-4b85-bb09-719f7f8848e5","Type":"ContainerDied","Data":"35e539edffd4befaf58ddbd82f10b42e2fa27c08291cd4ee6812953680aa1520"} Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.932760 5024 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.952932 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54847d48c6-bfdb4"]
Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.956136 5024 scope.go:117] "RemoveContainer" containerID="f99c89d30d47ae0260479d7a88fc8826c8ac67cf3effa3b0137593b2afdfb678"
Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.975783 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j64mb"]
Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.980872 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j64mb"]
Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.985437 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ff9c64758-jx2vh"]
Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.986203 5024 scope.go:117] "RemoveContainer" containerID="5c7710a9b13e3a8575b38617de497d1605c0c70a9bd6b56c90990b4baa77750b"
Nov 28 17:04:41 crc kubenswrapper[5024]: I1128 17:04:41.990888 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7ff9c64758-jx2vh"]
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.003257 5024 scope.go:117] "RemoveContainer" containerID="c5edb50b4eeb27f422303229c0f6fbd014be12154f17df46cbfff842ef1407bb"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.384746 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gdgdt"]
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.386359 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gdgdt" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="registry-server" containerID="cri-o://92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393" gracePeriod=2
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.508692 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb80018-6d46-4b85-bb09-719f7f8848e5" path="/var/lib/kubelet/pods/1eb80018-6d46-4b85-bb09-719f7f8848e5/volumes"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.509318 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" path="/var/lib/kubelet/pods/f10908eb-32ed-4e49-b1ea-7b627343b29d/volumes"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.606783 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqfjv"]
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.607113 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lqfjv" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="registry-server" containerID="cri-o://a4c736a350930343aac2858dfad7e47198a5732b12e77bd449bfbbbaf5de2f7f" gracePeriod=2
Nov 28 17:04:42 crc kubenswrapper[5024]: E1128 17:04:42.745924 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1587b87d_29af_4f60_a14f_d5e1dff6f5f2.slice/crio-a4c736a350930343aac2858dfad7e47198a5732b12e77bd449bfbbbaf5de2f7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1587b87d_29af_4f60_a14f_d5e1dff6f5f2.slice/crio-conmon-a4c736a350930343aac2858dfad7e47198a5732b12e77bd449bfbbbaf5de2f7f.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.956154 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gdgdt"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.966766 5024 generic.go:334] "Generic (PLEG): container finished" podID="542f05d2-a977-40de-887d-bc3538393234" containerID="92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393" exitCode=0
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.966829 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdgdt" event={"ID":"542f05d2-a977-40de-887d-bc3538393234","Type":"ContainerDied","Data":"92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393"}
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.966866 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gdgdt" event={"ID":"542f05d2-a977-40de-887d-bc3538393234","Type":"ContainerDied","Data":"f72d3bb0a8135c5131e06a294577fb5031fb9fe14ed2b4b940c9813bfdb6cebd"}
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.966887 5024 scope.go:117] "RemoveContainer" containerID="92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.966977 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gdgdt"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.986713 5024 generic.go:334] "Generic (PLEG): container finished" podID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerID="a4c736a350930343aac2858dfad7e47198a5732b12e77bd449bfbbbaf5de2f7f" exitCode=0
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.986828 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqfjv" event={"ID":"1587b87d-29af-4f60-a14f-d5e1dff6f5f2","Type":"ContainerDied","Data":"a4c736a350930343aac2858dfad7e47198a5732b12e77bd449bfbbbaf5de2f7f"}
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.988584 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" event={"ID":"d4e58b0d-aafa-4b3c-ba90-be0db225a246","Type":"ContainerStarted","Data":"49e3edb3099eeafb2556c0fbc25ceb0da22fdbb1c5691c35fe4628c28cc8d52c"}
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.988603 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" event={"ID":"d4e58b0d-aafa-4b3c-ba90-be0db225a246","Type":"ContainerStarted","Data":"56fe225561e017778c10d3c093ccabbb393016686bcf3d9d3f311aac6989081a"}
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.990226 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.996718 5024 scope.go:117] "RemoveContainer" containerID="12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05"
Nov 28 17:04:42 crc kubenswrapper[5024]: I1128 17:04:42.997576 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4"
pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.025930 5024 scope.go:117] "RemoveContainer" containerID="1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.050374 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54847d48c6-bfdb4" podStartSLOduration=4.050339282 podStartE2EDuration="4.050339282s" podCreationTimestamp="2025-11-28 17:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:04:43.01169357 +0000 UTC m=+385.060614485" watchObservedRunningTime="2025-11-28 17:04:43.050339282 +0000 UTC m=+385.099260207" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.072662 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqfjv" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.088597 5024 scope.go:117] "RemoveContainer" containerID="92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393" Nov 28 17:04:43 crc kubenswrapper[5024]: E1128 17:04:43.091351 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393\": container with ID starting with 92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393 not found: ID does not exist" containerID="92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.091481 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393"} err="failed to get container status \"92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393\": rpc error: code = NotFound desc = could not find container \"92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393\": container with ID starting with 92a2e9bfcfe8a39ffce7afabf7e9aa7d7d81f958ce43653d0c1ec8012b34f393 not found: ID does not exist" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.091541 5024 scope.go:117] "RemoveContainer" containerID="12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05" Nov 28 17:04:43 crc kubenswrapper[5024]: E1128 17:04:43.092216 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05\": container with ID starting with 12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05 not found: ID does not exist" containerID="12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.092238 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05"} err="failed to get container status \"12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05\": rpc error: code = NotFound desc = could not find container \"12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05\": container with ID starting with 12aa07233851b87dbf0bc559b438a71e5f26dfaf92b76d0703bbbe220083ef05 not found: ID does not exist" Nov 28 17:04:43 crc kubenswrapper[5024]: 
I1128 17:04:43.092267 5024 scope.go:117] "RemoveContainer" containerID="1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86" Nov 28 17:04:43 crc kubenswrapper[5024]: E1128 17:04:43.092535 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86\": container with ID starting with 1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86 not found: ID does not exist" containerID="1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.092560 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86"} err="failed to get container status \"1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86\": rpc error: code = NotFound desc = could not find container \"1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86\": container with ID starting with 1a353a52126d6925fe13cd4f5603a434cb5d2546a9c150bf334f20a99863ac86 not found: ID does not exist" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.103012 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-catalog-content\") pod \"542f05d2-a977-40de-887d-bc3538393234\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.103100 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-utilities\") pod \"542f05d2-a977-40de-887d-bc3538393234\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.103195 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lnzn\" (UniqueName: \"kubernetes.io/projected/542f05d2-a977-40de-887d-bc3538393234-kube-api-access-8lnzn\") pod \"542f05d2-a977-40de-887d-bc3538393234\" (UID: \"542f05d2-a977-40de-887d-bc3538393234\") " Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.116216 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-utilities" (OuterVolumeSpecName: "utilities") pod "542f05d2-a977-40de-887d-bc3538393234" (UID: "542f05d2-a977-40de-887d-bc3538393234"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.117011 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/542f05d2-a977-40de-887d-bc3538393234-kube-api-access-8lnzn" (OuterVolumeSpecName: "kube-api-access-8lnzn") pod "542f05d2-a977-40de-887d-bc3538393234" (UID: "542f05d2-a977-40de-887d-bc3538393234"). InnerVolumeSpecName "kube-api-access-8lnzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.126229 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "542f05d2-a977-40de-887d-bc3538393234" (UID: "542f05d2-a977-40de-887d-bc3538393234"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.204375 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrz5q\" (UniqueName: \"kubernetes.io/projected/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-kube-api-access-rrz5q\") pod \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.204640 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-catalog-content\") pod \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.204749 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-utilities\") pod \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\" (UID: \"1587b87d-29af-4f60-a14f-d5e1dff6f5f2\") " Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.205147 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lnzn\" (UniqueName: \"kubernetes.io/projected/542f05d2-a977-40de-887d-bc3538393234-kube-api-access-8lnzn\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.205173 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.205205 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/542f05d2-a977-40de-887d-bc3538393234-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.205748 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-utilities" (OuterVolumeSpecName: "utilities") pod "1587b87d-29af-4f60-a14f-d5e1dff6f5f2" (UID: "1587b87d-29af-4f60-a14f-d5e1dff6f5f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.208679 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-kube-api-access-rrz5q" (OuterVolumeSpecName: "kube-api-access-rrz5q") pod "1587b87d-29af-4f60-a14f-d5e1dff6f5f2" (UID: "1587b87d-29af-4f60-a14f-d5e1dff6f5f2"). InnerVolumeSpecName "kube-api-access-rrz5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.307074 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.307123 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrz5q\" (UniqueName: \"kubernetes.io/projected/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-kube-api-access-rrz5q\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.307742 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gdgdt"] Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.313475 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gdgdt"] Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.344213 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1587b87d-29af-4f60-a14f-d5e1dff6f5f2" (UID: "1587b87d-29af-4f60-a14f-d5e1dff6f5f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.412514 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1587b87d-29af-4f60-a14f-d5e1dff6f5f2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.999299 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqfjv" event={"ID":"1587b87d-29af-4f60-a14f-d5e1dff6f5f2","Type":"ContainerDied","Data":"4bced54f3dd6b6c3d898d60dd4dd13d0d5216ecf6c15e33c639f9b2ed60feef8"} Nov 28 17:04:43 crc kubenswrapper[5024]: I1128 17:04:43.999370 5024 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:04:44 crc kubenswrapper[5024]: I1128 17:04:43.999858 5024 scope.go:117] "RemoveContainer" containerID="a4c736a350930343aac2858dfad7e47198a5732b12e77bd449bfbbbaf5de2f7f"
Nov 28 17:04:44 crc kubenswrapper[5024]: I1128 17:04:44.033811 5024 scope.go:117] "RemoveContainer" containerID="78ec764dcaeb663dcb4b75ef03dbd4be4617ca284c535318eb85d27750600480"
Nov 28 17:04:44 crc kubenswrapper[5024]: I1128 17:04:44.073780 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqfjv"]
Nov 28 17:04:44 crc kubenswrapper[5024]: I1128 17:04:44.073946 5024 scope.go:117] "RemoveContainer" containerID="9c03dbf91b90eac91de49cd007a68a0467f48a17ecf571bec277eb276410aa3a"
Nov 28 17:04:44 crc kubenswrapper[5024]: I1128 17:04:44.078342 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lqfjv"]
Nov 28 17:04:44 crc kubenswrapper[5024]: I1128 17:04:44.509137 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" path="/var/lib/kubelet/pods/1587b87d-29af-4f60-a14f-d5e1dff6f5f2/volumes"
Nov 28 17:04:44 crc kubenswrapper[5024]: I1128 17:04:44.510013 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="542f05d2-a977-40de-887d-bc3538393234" path="/var/lib/kubelet/pods/542f05d2-a977-40de-887d-bc3538393234/volumes"
Nov 28 17:04:48 crc kubenswrapper[5024]: I1128 17:04:48.995613 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kx8x6"]
Nov 28 17:04:48 crc kubenswrapper[5024]: I1128 17:04:48.997108 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kx8x6" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="registry-server" containerID="cri-o://5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18" gracePeriod=30
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.033201 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rc8qm"]
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.033860 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rc8qm" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="registry-server" containerID="cri-o://b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf" gracePeriod=30
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.044419 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p4ff"]
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.044719 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" containerID="cri-o://476661b4d061905781fdc8d667a57a3ff2d047d92a598bf1c6af70a17d190790" gracePeriod=30
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.060480 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zl4ft"]
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.060821 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zl4ft" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerName="registry-server" containerID="cri-o://010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1" gracePeriod=30
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.068482 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pnzzt"]
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.069975 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pnzzt" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="registry-server" containerID="cri-o://da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127" gracePeriod=30
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072037 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vnd7q"]
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072341 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="extract-content"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072360 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="extract-content"
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072386 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="registry-server"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072415 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="registry-server"
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072424 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="extract-utilities"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072431 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="extract-utilities"
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072440 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="extract-utilities"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072446 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="extract-utilities"
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072456 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="registry-server"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072464 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="registry-server"
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072474 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="extract-utilities"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072480 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="extract-utilities"
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072494 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="extract-content"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072502 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="extract-content"
assignment" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="extract-content" Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072511 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="registry-server" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072516 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="registry-server" Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.072527 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="extract-content" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072535 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="extract-content" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072627 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1587b87d-29af-4f60-a14f-d5e1dff6f5f2" containerName="registry-server" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072637 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10908eb-32ed-4e49-b1ea-7b627343b29d" containerName="registry-server" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.072651 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="542f05d2-a977-40de-887d-bc3538393234" containerName="registry-server" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.073225 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.081099 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vnd7q"] Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.148576 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02049d91-d768-4285-8a95-b88d379bee70-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.148671 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02049d91-d768-4285-8a95-b88d379bee70-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.148736 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhns\" (UniqueName: \"kubernetes.io/projected/02049d91-d768-4285-8a95-b88d379bee70-kube-api-access-wfhns\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.250068 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02049d91-d768-4285-8a95-b88d379bee70-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.250148 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02049d91-d768-4285-8a95-b88d379bee70-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.250191 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfhns\" (UniqueName: \"kubernetes.io/projected/02049d91-d768-4285-8a95-b88d379bee70-kube-api-access-wfhns\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.251503 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02049d91-d768-4285-8a95-b88d379bee70-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.257621 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02049d91-d768-4285-8a95-b88d379bee70-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.271388 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfhns\" (UniqueName: \"kubernetes.io/projected/02049d91-d768-4285-8a95-b88d379bee70-kube-api-access-wfhns\") pod \"marketplace-operator-79b997595-vnd7q\" (UID: \"02049d91-d768-4285-8a95-b88d379bee70\") " pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.405509 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.630216 5024 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.658222 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s84v4\" (UniqueName: \"kubernetes.io/projected/81188cf2-b85a-46bb-baf2-cda9e211eda7-kube-api-access-s84v4\") pod \"81188cf2-b85a-46bb-baf2-cda9e211eda7\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") "
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.658402 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-utilities\") pod \"81188cf2-b85a-46bb-baf2-cda9e211eda7\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") "
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.660251 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-catalog-content\") pod \"81188cf2-b85a-46bb-baf2-cda9e211eda7\" (UID: \"81188cf2-b85a-46bb-baf2-cda9e211eda7\") "
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.671041 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-utilities" (OuterVolumeSpecName: "utilities") pod "81188cf2-b85a-46bb-baf2-cda9e211eda7" (UID: "81188cf2-b85a-46bb-baf2-cda9e211eda7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.673256 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81188cf2-b85a-46bb-baf2-cda9e211eda7-kube-api-access-s84v4" (OuterVolumeSpecName: "kube-api-access-s84v4") pod "81188cf2-b85a-46bb-baf2-cda9e211eda7" (UID: "81188cf2-b85a-46bb-baf2-cda9e211eda7"). InnerVolumeSpecName "kube-api-access-s84v4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.688949 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81188cf2-b85a-46bb-baf2-cda9e211eda7" (UID: "81188cf2-b85a-46bb-baf2-cda9e211eda7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.740068 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18 is running failed: container process not found" containerID="5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.740570 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18 is running failed: container process not found" containerID="5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.741384 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18 is running failed: container process not found" containerID="5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.741426 5024 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-kx8x6" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="registry-server"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.762013 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s84v4\" (UniqueName: \"kubernetes.io/projected/81188cf2-b85a-46bb-baf2-cda9e211eda7-kube-api-access-s84v4\") on node \"crc\" DevicePath \"\""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.762074 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.762089 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81188cf2-b85a-46bb-baf2-cda9e211eda7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.810987 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnzzt"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.863407 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j58b\" (UniqueName: \"kubernetes.io/projected/610e20bb-07aa-46c2-9f83-1711f9133ad0-kube-api-access-5j58b\") pod \"610e20bb-07aa-46c2-9f83-1711f9133ad0\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") "
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.863487 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-utilities\") pod \"610e20bb-07aa-46c2-9f83-1711f9133ad0\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") "
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.863547 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-catalog-content\") pod \"610e20bb-07aa-46c2-9f83-1711f9133ad0\" (UID: \"610e20bb-07aa-46c2-9f83-1711f9133ad0\") "
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.864198 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-utilities" (OuterVolumeSpecName: "utilities") pod "610e20bb-07aa-46c2-9f83-1711f9133ad0" (UID: "610e20bb-07aa-46c2-9f83-1711f9133ad0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.867034 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610e20bb-07aa-46c2-9f83-1711f9133ad0-kube-api-access-5j58b" (OuterVolumeSpecName: "kube-api-access-5j58b") pod "610e20bb-07aa-46c2-9f83-1711f9133ad0" (UID: "610e20bb-07aa-46c2-9f83-1711f9133ad0"). InnerVolumeSpecName "kube-api-access-5j58b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.945431 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf is running failed: container process not found" containerID="b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.945788 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf is running failed: container process not found" containerID="b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.946095 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf is running failed: container process not found" containerID="b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 17:04:49 crc kubenswrapper[5024]: E1128 17:04:49.946127 5024 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-rc8qm" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="registry-server"
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.964805 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j58b\" (UniqueName: \"kubernetes.io/projected/610e20bb-07aa-46c2-9f83-1711f9133ad0-kube-api-access-5j58b\") on node \"crc\" DevicePath \"\""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.964843 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:04:49 crc kubenswrapper[5024]: I1128 17:04:49.992061 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "610e20bb-07aa-46c2-9f83-1711f9133ad0" (UID: "610e20bb-07aa-46c2-9f83-1711f9133ad0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.067690 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/610e20bb-07aa-46c2-9f83-1711f9133ad0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.073423 5024 generic.go:334] "Generic (PLEG): container finished" podID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerID="5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18" exitCode=0 Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.073539 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerDied","Data":"5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18"} Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.143629 5024 generic.go:334] "Generic (PLEG): container finished" podID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerID="010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1" exitCode=0 Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.143742 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zl4ft" event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerDied","Data":"010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1"} Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.143776 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zl4ft" event={"ID":"81188cf2-b85a-46bb-baf2-cda9e211eda7","Type":"ContainerDied","Data":"30f8a80048b44a1cee48a71d91e02c1465004595417932fd1191a9a2ceaaeefe"} Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.143797 5024 scope.go:117] "RemoveContainer" containerID="010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.143958 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zl4ft" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.150471 5024 generic.go:334] "Generic (PLEG): container finished" podID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerID="da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127" exitCode=0 Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.150554 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnzzt" event={"ID":"610e20bb-07aa-46c2-9f83-1711f9133ad0","Type":"ContainerDied","Data":"da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127"} Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.150588 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnzzt" event={"ID":"610e20bb-07aa-46c2-9f83-1711f9133ad0","Type":"ContainerDied","Data":"d2f3f28c214d5b081e933cc23c3e66fca212d759f26b2343f4bb1e3d20dd2b25"} Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.150668 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pnzzt" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.155906 5024 generic.go:334] "Generic (PLEG): container finished" podID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerID="476661b4d061905781fdc8d667a57a3ff2d047d92a598bf1c6af70a17d190790" exitCode=0 Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.156001 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" event={"ID":"80a843cd-6141-431e-83c1-a7ce0110e31f","Type":"ContainerDied","Data":"476661b4d061905781fdc8d667a57a3ff2d047d92a598bf1c6af70a17d190790"} Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.165635 5024 generic.go:334] "Generic (PLEG): container finished" podID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerID="b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf" exitCode=0 Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.165690 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerDied","Data":"b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf"} Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.188952 5024 scope.go:117] "RemoveContainer" containerID="986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.206537 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vnd7q"] Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.217728 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zl4ft"] Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.221116 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zl4ft"] Nov 28 17:04:50 crc kubenswrapper[5024]: W1128 17:04:50.238227 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02049d91_d768_4285_8a95_b88d379bee70.slice/crio-91977f3d70dda97f5152d5ce151e10cbdea01a9cabfa46a860f1c59a3814853b WatchSource:0}: Error finding container 91977f3d70dda97f5152d5ce151e10cbdea01a9cabfa46a860f1c59a3814853b: Status 404 returned error can't find the container with id 91977f3d70dda97f5152d5ce151e10cbdea01a9cabfa46a860f1c59a3814853b Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.245101 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pnzzt"] Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.250513 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pnzzt"] Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.305120 5024 scope.go:117] "RemoveContainer" containerID="772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.332598 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.343058 5024 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.343671 5024 scope.go:117] "RemoveContainer" containerID="010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1"
Nov 28 17:04:50 crc kubenswrapper[5024]: E1128 17:04:50.344007 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1\": container with ID starting with 010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1 not found: ID does not exist" containerID="010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1"
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.344126 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1"} err="failed to get container status \"010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1\": rpc error: code = NotFound desc = could not find container \"010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1\": container with ID starting with 010d3c632ebf08931dce6fcc7db092a070e6a1fcdea794a7494e8db3be774af1 not found: ID does not exist"
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.344149 5024 scope.go:117] "RemoveContainer" containerID="986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0"
Nov 28 17:04:50 crc kubenswrapper[5024]: E1128 17:04:50.344553 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0\": container with ID starting with 986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0 not found: ID does not exist" containerID="986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0"
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.344594 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0"} err="failed to get container status \"986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0\": rpc error: code = NotFound desc = could not find container \"986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0\": container with ID starting with 986a0dde13359c340669624848d2074d35952a29feb574410e5db6055609cad0 not found: ID does not exist"
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.344633 5024 scope.go:117] "RemoveContainer" containerID="772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662"
Nov 28 17:04:50 crc kubenswrapper[5024]: E1128 17:04:50.344939 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662\": container with ID starting with 772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662 not found: ID does not exist" containerID="772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662"
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.344983 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662"} err="failed to get container status \"772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662\": rpc error: code = NotFound desc = could not find container \"772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662\": container with ID starting with 772ee4011d88ab3d6b37bc7ec062ab7c8b5ce2215a5b65c32d6ac92abc75d662 not found: ID does not exist"
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.345010 5024 scope.go:117] "RemoveContainer" containerID="da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127"
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.371552 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-trusted-ca\") pod \"80a843cd-6141-431e-83c1-a7ce0110e31f\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") "
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.371913 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9xc5\" (UniqueName: \"kubernetes.io/projected/80a843cd-6141-431e-83c1-a7ce0110e31f-kube-api-access-v9xc5\") pod \"80a843cd-6141-431e-83c1-a7ce0110e31f\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") "
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.371979 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-operator-metrics\") pod \"80a843cd-6141-431e-83c1-a7ce0110e31f\" (UID: \"80a843cd-6141-431e-83c1-a7ce0110e31f\") "
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.372064 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-utilities\") pod \"2a0db523-f690-4c23-8324-b417a8ccd4b2\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") "
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.372136 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-catalog-content\") pod \"2a0db523-f690-4c23-8324-b417a8ccd4b2\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") "
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.372231 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzhws\" (UniqueName: \"kubernetes.io/projected/2a0db523-f690-4c23-8324-b417a8ccd4b2-kube-api-access-fzhws\") pod \"2a0db523-f690-4c23-8324-b417a8ccd4b2\" (UID: \"2a0db523-f690-4c23-8324-b417a8ccd4b2\") "
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.373966 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "80a843cd-6141-431e-83c1-a7ce0110e31f" (UID: "80a843cd-6141-431e-83c1-a7ce0110e31f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.380753 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a843cd-6141-431e-83c1-a7ce0110e31f-kube-api-access-v9xc5" (OuterVolumeSpecName: "kube-api-access-v9xc5") pod "80a843cd-6141-431e-83c1-a7ce0110e31f" (UID: "80a843cd-6141-431e-83c1-a7ce0110e31f"). InnerVolumeSpecName "kube-api-access-v9xc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.381782 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-utilities" (OuterVolumeSpecName: "utilities") pod "2a0db523-f690-4c23-8324-b417a8ccd4b2" (UID: "2a0db523-f690-4c23-8324-b417a8ccd4b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.382920 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.401824 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "80a843cd-6141-431e-83c1-a7ce0110e31f" (UID: "80a843cd-6141-431e-83c1-a7ce0110e31f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.406854 5024 scope.go:117] "RemoveContainer" containerID="8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.409582 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0db523-f690-4c23-8324-b417a8ccd4b2-kube-api-access-fzhws" (OuterVolumeSpecName: "kube-api-access-fzhws") pod "2a0db523-f690-4c23-8324-b417a8ccd4b2" (UID: "2a0db523-f690-4c23-8324-b417a8ccd4b2"). InnerVolumeSpecName "kube-api-access-fzhws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.448741 5024 scope.go:117] "RemoveContainer" containerID="281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.452756 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a0db523-f690-4c23-8324-b417a8ccd4b2" (UID: "2a0db523-f690-4c23-8324-b417a8ccd4b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.463904 5024 scope.go:117] "RemoveContainer" containerID="da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127" Nov 28 17:04:50 crc kubenswrapper[5024]: E1128 17:04:50.464769 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127\": container with ID starting with da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127 not found: ID does not exist" containerID="da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.464834 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127"} err="failed to get container status \"da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127\": rpc error: code = NotFound desc = could not find container \"da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127\": container with ID starting with da692b71b387ae09c136f4836eaf2817520448b1bef8f0756610c73541112127 not found: ID does not exist" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.464873 5024 scope.go:117] "RemoveContainer" containerID="8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b" Nov 28 17:04:50 crc kubenswrapper[5024]: E1128 17:04:50.465259 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b\": container with ID starting with 8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b not found: ID does not exist" containerID="8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.465301 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b"} err="failed to get container status \"8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b\": rpc error: code = NotFound desc = could not find container \"8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b\": container with ID starting with 8c5a874bf5e6b493a652c8852e1c28eed91009d4dd659ad89ede384139fa110b not found: ID does not exist" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.465331 5024 scope.go:117] "RemoveContainer" containerID="281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06" Nov 28 17:04:50 crc kubenswrapper[5024]: E1128 17:04:50.465593 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06\": container with ID starting with 281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06 not found: ID does not exist" containerID="281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.465625 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06"} err="failed to get container status \"281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06\": rpc error: code = NotFound desc = could not 
find container \"281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06\": container with ID starting with 281a5f1d4c03eae62a05bd1c36fe16b4413b3e7ed6f62f0ed3bca9859e6c7a06 not found: ID does not exist" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.465642 5024 scope.go:117] "RemoveContainer" containerID="213be41ff4da95b7cc71ec5360caf9eb6ff2895cf36d82f7601157b4f203b416" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.474038 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52d7w\" (UniqueName: \"kubernetes.io/projected/8fae0fa8-8183-4e44-afed-63a655dd82c5-kube-api-access-52d7w\") pod \"8fae0fa8-8183-4e44-afed-63a655dd82c5\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.474234 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-utilities\") pod \"8fae0fa8-8183-4e44-afed-63a655dd82c5\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.474298 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-catalog-content\") pod \"8fae0fa8-8183-4e44-afed-63a655dd82c5\" (UID: \"8fae0fa8-8183-4e44-afed-63a655dd82c5\") " Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.475074 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-utilities" (OuterVolumeSpecName: "utilities") pod "8fae0fa8-8183-4e44-afed-63a655dd82c5" (UID: "8fae0fa8-8183-4e44-afed-63a655dd82c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.478626 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fae0fa8-8183-4e44-afed-63a655dd82c5-kube-api-access-52d7w" (OuterVolumeSpecName: "kube-api-access-52d7w") pod "8fae0fa8-8183-4e44-afed-63a655dd82c5" (UID: "8fae0fa8-8183-4e44-afed-63a655dd82c5"). InnerVolumeSpecName "kube-api-access-52d7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.482929 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9xc5\" (UniqueName: \"kubernetes.io/projected/80a843cd-6141-431e-83c1-a7ce0110e31f-kube-api-access-v9xc5\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.482992 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.483011 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.483052 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0db523-f690-4c23-8324-b417a8ccd4b2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.483066 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52d7w\" (UniqueName: \"kubernetes.io/projected/8fae0fa8-8183-4e44-afed-63a655dd82c5-kube-api-access-52d7w\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.483080 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzhws\" (UniqueName: \"kubernetes.io/projected/2a0db523-f690-4c23-8324-b417a8ccd4b2-kube-api-access-fzhws\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.483096 5024 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/80a843cd-6141-431e-83c1-a7ce0110e31f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.483108 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.507679 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" path="/var/lib/kubelet/pods/610e20bb-07aa-46c2-9f83-1711f9133ad0/volumes" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.508678 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" path="/var/lib/kubelet/pods/81188cf2-b85a-46bb-baf2-cda9e211eda7/volumes" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.540252 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fae0fa8-8183-4e44-afed-63a655dd82c5" (UID: "8fae0fa8-8183-4e44-afed-63a655dd82c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:04:50 crc kubenswrapper[5024]: I1128 17:04:50.584469 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fae0fa8-8183-4e44-afed-63a655dd82c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.182722 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" event={"ID":"80a843cd-6141-431e-83c1-a7ce0110e31f","Type":"ContainerDied","Data":"ae81daa2c7c1fbfa0f7b6dbb689378384aae840a496609b93eac095058c05013"} Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.183129 5024 scope.go:117] "RemoveContainer" containerID="476661b4d061905781fdc8d667a57a3ff2d047d92a598bf1c6af70a17d190790" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.183031 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6p4ff" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.187131 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc8qm" event={"ID":"8fae0fa8-8183-4e44-afed-63a655dd82c5","Type":"ContainerDied","Data":"4136ba2fb5cf112764d83b79cf05e66f112861703f3e18839888fb3c480e9e71"} Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.187258 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc8qm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.189510 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" event={"ID":"02049d91-d768-4285-8a95-b88d379bee70","Type":"ContainerStarted","Data":"3f72a49007ff1a676ab78e48a5c454ae5f13a36d5c5b60b35c7f2eaeb189d622"} Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.189665 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" event={"ID":"02049d91-d768-4285-8a95-b88d379bee70","Type":"ContainerStarted","Data":"91977f3d70dda97f5152d5ce151e10cbdea01a9cabfa46a860f1c59a3814853b"} Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.189821 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.191926 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kx8x6" event={"ID":"2a0db523-f690-4c23-8324-b417a8ccd4b2","Type":"ContainerDied","Data":"680dd644bf1cd91ee773fc214e508c02ac7e124dbdaf37b54ac6000094b3ce48"} Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.191940 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kx8x6" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.201744 5024 scope.go:117] "RemoveContainer" containerID="b76052db5c5012cf089a1654370e3c881045b6bb58604a4f7013a262fbbef6bf" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.203727 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.222886 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p4ff"] Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.250845 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6p4ff"] Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.258144 5024 scope.go:117] "RemoveContainer" containerID="2271706b2324792f8ab3fcbb64ab5757d5df325ae50cef2460cb667373cdb2bf" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.272400 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kx8x6"] Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.282450 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vnd7q" podStartSLOduration=2.282426498 podStartE2EDuration="2.282426498s" podCreationTimestamp="2025-11-28 17:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:04:51.254127973 +0000 UTC m=+393.303048878" watchObservedRunningTime="2025-11-28 17:04:51.282426498 +0000 UTC m=+393.331347403" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.286111 5024 scope.go:117] "RemoveContainer" containerID="1c6c2081769d4df2058cc74ea0fb949d0c6bc9f92ae1981b8856303bc27a338a" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.286277 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kx8x6"] Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.291621 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rc8qm"] Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.294252 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rc8qm"] Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.313235 5024 scope.go:117] "RemoveContainer" containerID="5af1910d98817e8fed6c253f99f6ca6db9401f4c1fecf70a7085ba737134be18" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.336646 5024 scope.go:117] "RemoveContainer" containerID="d70d80e64e2e18a34726389e29c66130c41a076b0ee21e580d4a56e26ca252a8" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.354964 5024 scope.go:117] "RemoveContainer" containerID="45ed1b8d7583e4a799482dc6d4592468658cab8815404474a6558d7dfb6ab016" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.415537 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cl4bm"] Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.417245 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419511 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" 
containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419546 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419556 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419566 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419576 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419589 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419598 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419605 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419612 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419625 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419632 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419640 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419648 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419657 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419664 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419677 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419685 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419699 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419707 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419714 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419721 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419732 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419741 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="extract-content" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419752 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419760 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="extract-utilities" Nov 28 17:04:51 crc kubenswrapper[5024]: E1128 17:04:51.419775 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419783 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419976 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.419991 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.420000 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="81188cf2-b85a-46bb-baf2-cda9e211eda7" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.420013 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="610e20bb-07aa-46c2-9f83-1711f9133ad0" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.420042 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" containerName="registry-server" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.420051 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" containerName="marketplace-operator" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.422206 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.425738 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.428511 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl4bm"] Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.498847 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ed0b11-7e2e-4592-9ffc-9851bc16e811-catalog-content\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.498917 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ed0b11-7e2e-4592-9ffc-9851bc16e811-utilities\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.499654 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9grp\" (UniqueName: \"kubernetes.io/projected/38ed0b11-7e2e-4592-9ffc-9851bc16e811-kube-api-access-x9grp\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.601409 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ed0b11-7e2e-4592-9ffc-9851bc16e811-catalog-content\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.601520 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ed0b11-7e2e-4592-9ffc-9851bc16e811-utilities\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.601590 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9grp\" (UniqueName: \"kubernetes.io/projected/38ed0b11-7e2e-4592-9ffc-9851bc16e811-kube-api-access-x9grp\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.602123 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38ed0b11-7e2e-4592-9ffc-9851bc16e811-catalog-content\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.602135 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38ed0b11-7e2e-4592-9ffc-9851bc16e811-utilities\") pod \"redhat-marketplace-cl4bm\" (UID: 
\"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.627381 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9grp\" (UniqueName: \"kubernetes.io/projected/38ed0b11-7e2e-4592-9ffc-9851bc16e811-kube-api-access-x9grp\") pod \"redhat-marketplace-cl4bm\" (UID: \"38ed0b11-7e2e-4592-9ffc-9851bc16e811\") " pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:51 crc kubenswrapper[5024]: I1128 17:04:51.738202 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.179740 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cl4bm"] Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.207704 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl4bm" event={"ID":"38ed0b11-7e2e-4592-9ffc-9851bc16e811","Type":"ContainerStarted","Data":"8d1074461283cd42ee4775159737d7ad602607125b284420aa9a99aebb400b23"} Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.389270 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmgv2"] Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.390864 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.393796 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.399958 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmgv2"] Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.424936 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82167b6a-2e43-4adb-9b4a-7c4d53f65979-utilities\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.425008 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4tb9\" (UniqueName: \"kubernetes.io/projected/82167b6a-2e43-4adb-9b4a-7c4d53f65979-kube-api-access-w4tb9\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.425067 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82167b6a-2e43-4adb-9b4a-7c4d53f65979-catalog-content\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.507549 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0db523-f690-4c23-8324-b417a8ccd4b2" path="/var/lib/kubelet/pods/2a0db523-f690-4c23-8324-b417a8ccd4b2/volumes" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.508427 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a843cd-6141-431e-83c1-a7ce0110e31f" 
path="/var/lib/kubelet/pods/80a843cd-6141-431e-83c1-a7ce0110e31f/volumes" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.509133 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fae0fa8-8183-4e44-afed-63a655dd82c5" path="/var/lib/kubelet/pods/8fae0fa8-8183-4e44-afed-63a655dd82c5/volumes" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.526509 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82167b6a-2e43-4adb-9b4a-7c4d53f65979-utilities\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.526675 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4tb9\" (UniqueName: \"kubernetes.io/projected/82167b6a-2e43-4adb-9b4a-7c4d53f65979-kube-api-access-w4tb9\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.526773 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82167b6a-2e43-4adb-9b4a-7c4d53f65979-catalog-content\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.527336 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82167b6a-2e43-4adb-9b4a-7c4d53f65979-utilities\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.527419 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82167b6a-2e43-4adb-9b4a-7c4d53f65979-catalog-content\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.558064 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4tb9\" (UniqueName: \"kubernetes.io/projected/82167b6a-2e43-4adb-9b4a-7c4d53f65979-kube-api-access-w4tb9\") pod \"redhat-operators-rmgv2\" (UID: \"82167b6a-2e43-4adb-9b4a-7c4d53f65979\") " pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:52 crc kubenswrapper[5024]: I1128 17:04:52.751466 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:04:53 crc kubenswrapper[5024]: I1128 17:04:53.170941 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmgv2"] Nov 28 17:04:53 crc kubenswrapper[5024]: I1128 17:04:53.224526 5024 generic.go:334] "Generic (PLEG): container finished" podID="38ed0b11-7e2e-4592-9ffc-9851bc16e811" containerID="276646a09a628c620efdd775699614907a38fdece3dce02bb3ec9352e3798123" exitCode=0 Nov 28 17:04:53 crc kubenswrapper[5024]: I1128 17:04:53.225125 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl4bm" event={"ID":"38ed0b11-7e2e-4592-9ffc-9851bc16e811","Type":"ContainerDied","Data":"276646a09a628c620efdd775699614907a38fdece3dce02bb3ec9352e3798123"} Nov 28 17:04:53 crc kubenswrapper[5024]: I1128 17:04:53.226884 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgv2" event={"ID":"82167b6a-2e43-4adb-9b4a-7c4d53f65979","Type":"ContainerStarted","Data":"3ef3c596ae9558e2e72896e5981eb5f5b0545bb2d06aa913995136af9f924d0a"} Nov 28 17:04:53 crc kubenswrapper[5024]: I1128 17:04:53.989988 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kcnqq"] Nov 28 17:04:53 crc kubenswrapper[5024]: I1128 17:04:53.991776 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:53 crc kubenswrapper[5024]: I1128 17:04:53.993662 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.016977 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcnqq"] Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.054087 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-catalog-content\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.054249 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-utilities\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.054328 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flgtp\" (UniqueName: \"kubernetes.io/projected/a5d05e6a-edfa-4707-959c-c3997debbed1-kube-api-access-flgtp\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.155488 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flgtp\" (UniqueName: \"kubernetes.io/projected/a5d05e6a-edfa-4707-959c-c3997debbed1-kube-api-access-flgtp\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc 
kubenswrapper[5024]: I1128 17:04:54.156120 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-catalog-content\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.156178 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-utilities\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.156845 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-utilities\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.156916 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-catalog-content\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.185116 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flgtp\" (UniqueName: \"kubernetes.io/projected/a5d05e6a-edfa-4707-959c-c3997debbed1-kube-api-access-flgtp\") pod \"certified-operators-kcnqq\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.236675 5024 generic.go:334] "Generic (PLEG): container finished" podID="82167b6a-2e43-4adb-9b4a-7c4d53f65979" containerID="7a77122449eb508754760f3b0471bf80577959cc7fe9a0b94af7e2738315b122" exitCode=0 Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.236741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgv2" event={"ID":"82167b6a-2e43-4adb-9b4a-7c4d53f65979","Type":"ContainerDied","Data":"7a77122449eb508754760f3b0471bf80577959cc7fe9a0b94af7e2738315b122"} Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.315173 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:04:54 crc kubenswrapper[5024]: I1128 17:04:54.733482 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kcnqq"] Nov 28 17:04:54 crc kubenswrapper[5024]: W1128 17:04:54.736927 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d05e6a_edfa_4707_959c_c3997debbed1.slice/crio-346b10e8d8f800788a2657c06e2dfd7108fdb00afc95744844c8c878a32ceca2 WatchSource:0}: Error finding container 346b10e8d8f800788a2657c06e2dfd7108fdb00afc95744844c8c878a32ceca2: Status 404 returned error can't find the container with id 346b10e8d8f800788a2657c06e2dfd7108fdb00afc95744844c8c878a32ceca2 Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.195735 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pr8z6"] Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.197356 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.200892 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.209940 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pr8z6"] Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.247109 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgv2" event={"ID":"82167b6a-2e43-4adb-9b4a-7c4d53f65979","Type":"ContainerStarted","Data":"fe3f968f9c0b3d7b4b0ccde4bd0433ca8b2badc4e00d60b4ead8fbb88edd7160"} Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.250845 5024 generic.go:334] "Generic (PLEG): container finished" podID="38ed0b11-7e2e-4592-9ffc-9851bc16e811" containerID="72149d6eb87cabbc022642606cedb457447e385d9831193ac6a8a807964a486f" exitCode=0 Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.250969 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl4bm" event={"ID":"38ed0b11-7e2e-4592-9ffc-9851bc16e811","Type":"ContainerDied","Data":"72149d6eb87cabbc022642606cedb457447e385d9831193ac6a8a807964a486f"} Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.254180 5024 generic.go:334] "Generic (PLEG): container finished" podID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerID="0e0bfd592e07d2f7085b042406b0a8f80e50b56dd98f425e59d4ab21a584716b" exitCode=0 Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.255814 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnqq" event={"ID":"a5d05e6a-edfa-4707-959c-c3997debbed1","Type":"ContainerDied","Data":"0e0bfd592e07d2f7085b042406b0a8f80e50b56dd98f425e59d4ab21a584716b"} Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.255856 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnqq" event={"ID":"a5d05e6a-edfa-4707-959c-c3997debbed1","Type":"ContainerStarted","Data":"346b10e8d8f800788a2657c06e2dfd7108fdb00afc95744844c8c878a32ceca2"} Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.273923 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-utilities\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.273988 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-catalog-content\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.274034 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxn4l\" (UniqueName: \"kubernetes.io/projected/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-kube-api-access-xxn4l\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.375524 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-utilities\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.375592 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-catalog-content\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.375611 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxn4l\" (UniqueName: \"kubernetes.io/projected/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-kube-api-access-xxn4l\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.376345 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-utilities\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.376365 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-catalog-content\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.402715 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxn4l\" (UniqueName: \"kubernetes.io/projected/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-kube-api-access-xxn4l\") pod \"community-operators-pr8z6\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.530347 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:04:55 crc kubenswrapper[5024]: I1128 17:04:55.977335 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pr8z6"] Nov 28 17:04:56 crc kubenswrapper[5024]: I1128 17:04:56.262472 5024 generic.go:334] "Generic (PLEG): container finished" podID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerID="6c52d99ed03db9ce779628f5a4c8811f13848fda154e2387649888ae0b5b1861" exitCode=0 Nov 28 17:04:56 crc kubenswrapper[5024]: I1128 17:04:56.262551 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr8z6" event={"ID":"9c12a1e9-4dd9-4470-8343-ca7cedab2c34","Type":"ContainerDied","Data":"6c52d99ed03db9ce779628f5a4c8811f13848fda154e2387649888ae0b5b1861"} Nov 28 17:04:56 crc kubenswrapper[5024]: I1128 17:04:56.263167 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr8z6" event={"ID":"9c12a1e9-4dd9-4470-8343-ca7cedab2c34","Type":"ContainerStarted","Data":"3434b4feee76e0e4451e9a70b0657d8c59b875114c2ed1d4a3d8b202da9a4917"} Nov 28 17:04:56 crc kubenswrapper[5024]: I1128 17:04:56.268727 5024 generic.go:334] "Generic (PLEG): container finished" podID="82167b6a-2e43-4adb-9b4a-7c4d53f65979" containerID="fe3f968f9c0b3d7b4b0ccde4bd0433ca8b2badc4e00d60b4ead8fbb88edd7160" exitCode=0 Nov 28 17:04:56 crc kubenswrapper[5024]: I1128 17:04:56.268863 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgv2" event={"ID":"82167b6a-2e43-4adb-9b4a-7c4d53f65979","Type":"ContainerDied","Data":"fe3f968f9c0b3d7b4b0ccde4bd0433ca8b2badc4e00d60b4ead8fbb88edd7160"} Nov 28 17:04:56 crc kubenswrapper[5024]: I1128 17:04:56.280211 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cl4bm" event={"ID":"38ed0b11-7e2e-4592-9ffc-9851bc16e811","Type":"ContainerStarted","Data":"034161f02e358940023a1bd6af46866d7d3389a714f783adb319beec223d9b67"} Nov 28 17:04:56 crc kubenswrapper[5024]: I1128 17:04:56.307823 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cl4bm" podStartSLOduration=2.535914447 podStartE2EDuration="5.307786378s" podCreationTimestamp="2025-11-28 17:04:51 +0000 UTC" firstStartedPulling="2025-11-28 17:04:53.226844096 +0000 UTC m=+395.275765001" lastFinishedPulling="2025-11-28 17:04:55.998716027 +0000 UTC m=+398.047636932" observedRunningTime="2025-11-28 17:04:56.302168729 +0000 UTC m=+398.351089634" watchObservedRunningTime="2025-11-28 17:04:56.307786378 +0000 UTC m=+398.356707283" Nov 28 17:04:57 crc kubenswrapper[5024]: I1128 17:04:57.290225 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmgv2" event={"ID":"82167b6a-2e43-4adb-9b4a-7c4d53f65979","Type":"ContainerStarted","Data":"ccc78e9b20294e22d95ba0a4e81b3f25ce79ac8d278b7d7561d1c8c9bdbce9a5"} Nov 28 17:04:57 crc kubenswrapper[5024]: I1128 17:04:57.292807 5024 generic.go:334] "Generic (PLEG): container finished" podID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerID="7817104fcdf47d8678ee0fb55a2ad20ac63a50a76306540d0a6ceb81c823e546" exitCode=0 Nov 28 17:04:57 crc kubenswrapper[5024]: I1128 17:04:57.292881 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnqq" 
event={"ID":"a5d05e6a-edfa-4707-959c-c3997debbed1","Type":"ContainerDied","Data":"7817104fcdf47d8678ee0fb55a2ad20ac63a50a76306540d0a6ceb81c823e546"} Nov 28 17:04:57 crc kubenswrapper[5024]: I1128 17:04:57.393537 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmgv2" podStartSLOduration=2.890956615 podStartE2EDuration="5.393516536s" podCreationTimestamp="2025-11-28 17:04:52 +0000 UTC" firstStartedPulling="2025-11-28 17:04:54.243802574 +0000 UTC m=+396.292723479" lastFinishedPulling="2025-11-28 17:04:56.746362485 +0000 UTC m=+398.795283400" observedRunningTime="2025-11-28 17:04:57.3161476 +0000 UTC m=+399.365068505" watchObservedRunningTime="2025-11-28 17:04:57.393516536 +0000 UTC m=+399.442437441" Nov 28 17:04:58 crc kubenswrapper[5024]: I1128 17:04:58.300819 5024 generic.go:334] "Generic (PLEG): container finished" podID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerID="b7dccb87d369b223a27ca44796ae2223983a07dcfceb0980890259e7accbc225" exitCode=0 Nov 28 17:04:58 crc kubenswrapper[5024]: I1128 17:04:58.300858 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr8z6" event={"ID":"9c12a1e9-4dd9-4470-8343-ca7cedab2c34","Type":"ContainerDied","Data":"b7dccb87d369b223a27ca44796ae2223983a07dcfceb0980890259e7accbc225"} Nov 28 17:04:59 crc kubenswrapper[5024]: I1128 17:04:59.312554 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnqq" event={"ID":"a5d05e6a-edfa-4707-959c-c3997debbed1","Type":"ContainerStarted","Data":"588bed846a220a168eb3e4c654d5df623dd6f6cff7239a47ce4bd3543436b15f"} Nov 28 17:05:01 crc kubenswrapper[5024]: I1128 17:05:01.327083 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr8z6" event={"ID":"9c12a1e9-4dd9-4470-8343-ca7cedab2c34","Type":"ContainerStarted","Data":"b6f8dc0ecd2375f371405c9bc6f235ebd81eaf7d0be626449f225b73abd1d30a"} Nov 28 17:05:01 crc kubenswrapper[5024]: I1128 17:05:01.349866 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pr8z6" podStartSLOduration=2.12596224 podStartE2EDuration="6.34984426s" podCreationTimestamp="2025-11-28 17:04:55 +0000 UTC" firstStartedPulling="2025-11-28 17:04:56.265158375 +0000 UTC m=+398.314079280" lastFinishedPulling="2025-11-28 17:05:00.489040395 +0000 UTC m=+402.537961300" observedRunningTime="2025-11-28 17:05:01.349237178 +0000 UTC m=+403.398158083" watchObservedRunningTime="2025-11-28 17:05:01.34984426 +0000 UTC m=+403.398765155" Nov 28 17:05:01 crc kubenswrapper[5024]: I1128 17:05:01.350442 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kcnqq" podStartSLOduration=4.599244679 podStartE2EDuration="8.350435101s" podCreationTimestamp="2025-11-28 17:04:53 +0000 UTC" firstStartedPulling="2025-11-28 17:04:55.255759855 +0000 UTC m=+397.304680760" lastFinishedPulling="2025-11-28 17:04:59.006950277 +0000 UTC m=+401.055871182" observedRunningTime="2025-11-28 17:04:59.333581191 +0000 UTC m=+401.382502096" watchObservedRunningTime="2025-11-28 17:05:01.350435101 +0000 UTC m=+403.399356006" Nov 28 17:05:01 crc kubenswrapper[5024]: I1128 17:05:01.739359 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:05:01 crc kubenswrapper[5024]: I1128 17:05:01.739745 5024 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:05:01 crc kubenswrapper[5024]: I1128 17:05:01.787013 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:05:02 crc kubenswrapper[5024]: I1128 17:05:02.381540 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cl4bm" Nov 28 17:05:02 crc kubenswrapper[5024]: I1128 17:05:02.752590 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:05:02 crc kubenswrapper[5024]: I1128 17:05:02.752646 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:05:02 crc kubenswrapper[5024]: I1128 17:05:02.804815 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:05:03 crc kubenswrapper[5024]: I1128 17:05:03.384708 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmgv2" Nov 28 17:05:04 crc kubenswrapper[5024]: I1128 17:05:04.316306 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:05:04 crc kubenswrapper[5024]: I1128 17:05:04.317580 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:05:04 crc kubenswrapper[5024]: I1128 17:05:04.376695 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:05:05 crc kubenswrapper[5024]: I1128 17:05:05.400089 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:05:05 crc kubenswrapper[5024]: I1128 17:05:05.531289 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:05:05 crc kubenswrapper[5024]: I1128 17:05:05.531843 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:05:05 crc kubenswrapper[5024]: I1128 17:05:05.576821 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:05:06 crc kubenswrapper[5024]: I1128 17:05:06.408224 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:05:07 crc kubenswrapper[5024]: I1128 17:05:07.564762 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:05:07 crc kubenswrapper[5024]: I1128 17:05:07.564856 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.754888 5024 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bdl8"] Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.757928 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.842196 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bdl8"] Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.863708 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-registry-tls\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.863770 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4a7d7b1-5066-438a-9028-176a29d0ba58-trusted-ca\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.863819 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.864045 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-bound-sa-token\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.864182 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lgm\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-kube-api-access-j4lgm\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.864342 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4a7d7b1-5066-438a-9028-176a29d0ba58-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.864556 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4a7d7b1-5066-438a-9028-176a29d0ba58-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 
28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.864625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4a7d7b1-5066-438a-9028-176a29d0ba58-registry-certificates\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.891264 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.965503 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-registry-tls\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.965551 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4a7d7b1-5066-438a-9028-176a29d0ba58-trusted-ca\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.965625 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-bound-sa-token\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.965647 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4lgm\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-kube-api-access-j4lgm\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.965673 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4a7d7b1-5066-438a-9028-176a29d0ba58-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.965725 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4a7d7b1-5066-438a-9028-176a29d0ba58-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.965757 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c4a7d7b1-5066-438a-9028-176a29d0ba58-registry-certificates\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.967458 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4a7d7b1-5066-438a-9028-176a29d0ba58-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.967985 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4a7d7b1-5066-438a-9028-176a29d0ba58-registry-certificates\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.968680 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4a7d7b1-5066-438a-9028-176a29d0ba58-trusted-ca\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.975034 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4a7d7b1-5066-438a-9028-176a29d0ba58-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.975166 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-registry-tls\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.982981 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-bound-sa-token\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:12 crc kubenswrapper[5024]: I1128 17:05:12.984769 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4lgm\" (UniqueName: \"kubernetes.io/projected/c4a7d7b1-5066-438a-9028-176a29d0ba58-kube-api-access-j4lgm\") pod \"image-registry-66df7c8f76-8bdl8\" (UID: \"c4a7d7b1-5066-438a-9028-176a29d0ba58\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:13 crc kubenswrapper[5024]: I1128 17:05:13.078621 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:13 crc kubenswrapper[5024]: I1128 17:05:13.575578 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bdl8"] Nov 28 17:05:14 crc kubenswrapper[5024]: I1128 17:05:14.409148 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" event={"ID":"c4a7d7b1-5066-438a-9028-176a29d0ba58","Type":"ContainerStarted","Data":"433364ae72c8f5eb7911a72b7aa23b75934b7193342e70d3b75978220734d101"} Nov 28 17:05:14 crc kubenswrapper[5024]: I1128 17:05:14.409488 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" event={"ID":"c4a7d7b1-5066-438a-9028-176a29d0ba58","Type":"ContainerStarted","Data":"048c3eabafe75f7904f305ecda4ed2d9c91cc7928b1ab33a68154c465c44a245"} Nov 28 17:05:14 crc kubenswrapper[5024]: I1128 17:05:14.409509 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:14 crc kubenswrapper[5024]: I1128 17:05:14.438649 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" podStartSLOduration=2.438625248 podStartE2EDuration="2.438625248s" podCreationTimestamp="2025-11-28 17:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:05:14.435745086 +0000 UTC m=+416.484666001" watchObservedRunningTime="2025-11-28 17:05:14.438625248 +0000 UTC m=+416.487546153" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.104282 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df"] Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.106370 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.108612 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.109941 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.110651 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.111110 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.112724 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.119375 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df"] Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.202713 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x4xk\" (UniqueName: \"kubernetes.io/projected/f1d25dc0-5319-495a-849f-47d47f2f8628-kube-api-access-8x4xk\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.202923 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f1d25dc0-5319-495a-849f-47d47f2f8628-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.203241 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1d25dc0-5319-495a-849f-47d47f2f8628-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.305423 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f1d25dc0-5319-495a-849f-47d47f2f8628-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.305507 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1d25dc0-5319-495a-849f-47d47f2f8628-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 
17:05:22.305559 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x4xk\" (UniqueName: \"kubernetes.io/projected/f1d25dc0-5319-495a-849f-47d47f2f8628-kube-api-access-8x4xk\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.306734 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f1d25dc0-5319-495a-849f-47d47f2f8628-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.312569 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f1d25dc0-5319-495a-849f-47d47f2f8628-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.324448 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x4xk\" (UniqueName: \"kubernetes.io/projected/f1d25dc0-5319-495a-849f-47d47f2f8628-kube-api-access-8x4xk\") pod \"cluster-monitoring-operator-6d5b84845-ww7df\" (UID: \"f1d25dc0-5319-495a-849f-47d47f2f8628\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.474711 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" Nov 28 17:05:22 crc kubenswrapper[5024]: I1128 17:05:22.896266 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df"] Nov 28 17:05:22 crc kubenswrapper[5024]: W1128 17:05:22.897066 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1d25dc0_5319_495a_849f_47d47f2f8628.slice/crio-bdd4460341edf34e47b1f9a49bc54bef99f77a33942f084933b832d51f131310 WatchSource:0}: Error finding container bdd4460341edf34e47b1f9a49bc54bef99f77a33942f084933b832d51f131310: Status 404 returned error can't find the container with id bdd4460341edf34e47b1f9a49bc54bef99f77a33942f084933b832d51f131310 Nov 28 17:05:23 crc kubenswrapper[5024]: I1128 17:05:23.465592 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" event={"ID":"f1d25dc0-5319-495a-849f-47d47f2f8628","Type":"ContainerStarted","Data":"bdd4460341edf34e47b1f9a49bc54bef99f77a33942f084933b832d51f131310"} Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.482088 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" event={"ID":"f1d25dc0-5319-495a-849f-47d47f2f8628","Type":"ContainerStarted","Data":"e1183aea02278fbb2c6b80a28819e1b7213f91a1fc673089d26afd187bbe24ae"} Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.527624 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-ww7df" podStartSLOduration=1.639869754 podStartE2EDuration="3.527601571s" podCreationTimestamp="2025-11-28 17:05:22 +0000 UTC" firstStartedPulling="2025-11-28 17:05:22.899519455 +0000 UTC m=+424.948440360" lastFinishedPulling="2025-11-28 17:05:24.787251272 +0000 UTC m=+426.836172177" observedRunningTime="2025-11-28 17:05:25.503387381 +0000 UTC m=+427.552308286" watchObservedRunningTime="2025-11-28 17:05:25.527601571 +0000 UTC m=+427.576522486" Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.541921 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6"] Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.543958 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.548515 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.548792 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-vpz22" Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.551753 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6"] Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.669260 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/80723c72-9962-4fd5-b0e6-80b184d08931-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rncq6\" (UID: \"80723c72-9962-4fd5-b0e6-80b184d08931\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.771613 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/80723c72-9962-4fd5-b0e6-80b184d08931-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rncq6\" (UID: \"80723c72-9962-4fd5-b0e6-80b184d08931\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.779826 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/80723c72-9962-4fd5-b0e6-80b184d08931-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rncq6\" (UID: \"80723c72-9962-4fd5-b0e6-80b184d08931\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" Nov 28 17:05:25 crc kubenswrapper[5024]: I1128 17:05:25.877436 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" Nov 28 17:05:26 crc kubenswrapper[5024]: I1128 17:05:26.276208 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6"] Nov 28 17:05:26 crc kubenswrapper[5024]: W1128 17:05:26.278251 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80723c72_9962_4fd5_b0e6_80b184d08931.slice/crio-7feadef4c5e5ade11a9e8eb999db1b06da7256ba345c9f322445123072aaa19d WatchSource:0}: Error finding container 7feadef4c5e5ade11a9e8eb999db1b06da7256ba345c9f322445123072aaa19d: Status 404 returned error can't find the container with id 7feadef4c5e5ade11a9e8eb999db1b06da7256ba345c9f322445123072aaa19d Nov 28 17:05:26 crc kubenswrapper[5024]: I1128 17:05:26.490217 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" event={"ID":"80723c72-9962-4fd5-b0e6-80b184d08931","Type":"ContainerStarted","Data":"7feadef4c5e5ade11a9e8eb999db1b06da7256ba345c9f322445123072aaa19d"} Nov 28 17:05:28 crc kubenswrapper[5024]: I1128 17:05:28.505586 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" Nov 28 17:05:28 crc kubenswrapper[5024]: I1128 17:05:28.505992 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" event={"ID":"80723c72-9962-4fd5-b0e6-80b184d08931","Type":"ContainerStarted","Data":"4c9e8cc656f375f016b3b9b11a1453fd290234d99ba62bfa07673a682b99a96c"} Nov 28 17:05:28 crc kubenswrapper[5024]: I1128 17:05:28.512097 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" Nov 28 17:05:28 crc kubenswrapper[5024]: I1128 17:05:28.539753 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rncq6" podStartSLOduration=1.9241280330000001 podStartE2EDuration="3.53973316s" podCreationTimestamp="2025-11-28 17:05:25 +0000 UTC" firstStartedPulling="2025-11-28 17:05:26.280384542 +0000 UTC m=+428.329305447" lastFinishedPulling="2025-11-28 17:05:27.895989669 +0000 UTC m=+429.944910574" observedRunningTime="2025-11-28 17:05:28.535415817 +0000 UTC m=+430.584336742" watchObservedRunningTime="2025-11-28 17:05:28.53973316 +0000 UTC m=+430.588654065" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.608642 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-z9v8x"] Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.609676 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.612586 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.612622 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.614489 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.614582 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-gbgbq" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.625515 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-z9v8x"] Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.757986 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/838e212a-59bb-47cf-bcc8-826acfe02a14-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.758064 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/838e212a-59bb-47cf-bcc8-826acfe02a14-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.758102 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27l6\" (UniqueName: \"kubernetes.io/projected/838e212a-59bb-47cf-bcc8-826acfe02a14-kube-api-access-t27l6\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.758131 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/838e212a-59bb-47cf-bcc8-826acfe02a14-metrics-client-ca\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.859340 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/838e212a-59bb-47cf-bcc8-826acfe02a14-metrics-client-ca\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.859473 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/838e212a-59bb-47cf-bcc8-826acfe02a14-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.859521 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/838e212a-59bb-47cf-bcc8-826acfe02a14-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.859572 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27l6\" (UniqueName: \"kubernetes.io/projected/838e212a-59bb-47cf-bcc8-826acfe02a14-kube-api-access-t27l6\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.861214 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/838e212a-59bb-47cf-bcc8-826acfe02a14-metrics-client-ca\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.867974 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/838e212a-59bb-47cf-bcc8-826acfe02a14-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.873157 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/838e212a-59bb-47cf-bcc8-826acfe02a14-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.882440 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27l6\" (UniqueName: \"kubernetes.io/projected/838e212a-59bb-47cf-bcc8-826acfe02a14-kube-api-access-t27l6\") pod \"prometheus-operator-db54df47d-z9v8x\" (UID: \"838e212a-59bb-47cf-bcc8-826acfe02a14\") " pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:29 crc kubenswrapper[5024]: I1128 17:05:29.928148 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" Nov 28 17:05:30 crc kubenswrapper[5024]: I1128 17:05:30.382761 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-z9v8x"] Nov 28 17:05:30 crc kubenswrapper[5024]: I1128 17:05:30.522007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" event={"ID":"838e212a-59bb-47cf-bcc8-826acfe02a14","Type":"ContainerStarted","Data":"29f3bf3bc914697f2ad3edb470391efb9da9da0c2678d334d127a4e75c4e9efa"} Nov 28 17:05:33 crc kubenswrapper[5024]: I1128 17:05:33.085461 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8bdl8" Nov 28 17:05:33 crc kubenswrapper[5024]: I1128 17:05:33.156241 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n4vqb"] Nov 28 17:05:34 crc kubenswrapper[5024]: I1128 17:05:34.557013 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" event={"ID":"838e212a-59bb-47cf-bcc8-826acfe02a14","Type":"ContainerStarted","Data":"aea8bb9153423fb14ffb686bc1e7d1ba4b8cd9be43fa7f2e20d88c4ba22199be"} Nov 28 17:05:34 crc kubenswrapper[5024]: I1128 17:05:34.557608 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" event={"ID":"838e212a-59bb-47cf-bcc8-826acfe02a14","Type":"ContainerStarted","Data":"47325dddb908d4c7ebfd7746b5f0464ed08d6c472f709e06f6f2f20e242f3b6b"} Nov 28 17:05:34 crc kubenswrapper[5024]: I1128 17:05:34.582209 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-z9v8x" podStartSLOduration=2.586951843 podStartE2EDuration="5.58214695s" podCreationTimestamp="2025-11-28 17:05:29 +0000 UTC" firstStartedPulling="2025-11-28 17:05:30.398087533 +0000 UTC m=+432.447008438" lastFinishedPulling="2025-11-28 17:05:33.39328264 +0000 UTC m=+435.442203545" observedRunningTime="2025-11-28 17:05:34.573202233 +0000 UTC m=+436.622123178" watchObservedRunningTime="2025-11-28 17:05:34.58214695 +0000 UTC m=+436.631067855" Nov 28 17:05:36 crc kubenswrapper[5024]: I1128 17:05:36.983999 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl"] Nov 28 17:05:36 crc kubenswrapper[5024]: I1128 17:05:36.985667 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:36 crc kubenswrapper[5024]: I1128 17:05:36.988212 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 28 17:05:36 crc kubenswrapper[5024]: I1128 17:05:36.988407 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 28 17:05:36 crc kubenswrapper[5024]: I1128 17:05:36.988481 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-x9hgw" Nov 28 17:05:36 crc kubenswrapper[5024]: I1128 17:05:36.995402 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t"] Nov 28 17:05:36 crc kubenswrapper[5024]: I1128 17:05:36.996649 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.002454 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.002635 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.002842 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.002560 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5lzbw" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.015094 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl"] Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.030580 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t"] Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.059262 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-zpq4x"] Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.078028 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.080647 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9f986886-944f-4d57-9ffd-c4c8699d7062-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.080698 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.081462 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-jx7v7" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.081665 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083364 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/37f90217-3ea7-45a4-a9f2-a40cf11a677c-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083607 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f986886-944f-4d57-9ffd-c4c8699d7062-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083642 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhl9\" (UniqueName: \"kubernetes.io/projected/9f986886-944f-4d57-9ffd-c4c8699d7062-kube-api-access-xvhl9\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083708 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083820 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/37f90217-3ea7-45a4-a9f2-a40cf11a677c-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083846 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8svm\" (UniqueName: \"kubernetes.io/projected/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-api-access-t8svm\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083898 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.083955 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9f986886-944f-4d57-9ffd-c4c8699d7062-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.085530 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.185953 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9f986886-944f-4d57-9ffd-c4c8699d7062-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186012 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186061 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-sys\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-tls\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186153 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/37f90217-3ea7-45a4-a9f2-a40cf11a677c-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186283 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-textfile\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186341 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186548 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f986886-944f-4d57-9ffd-c4c8699d7062-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186604 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-wtmp\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " 
pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186628 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvhl9\" (UniqueName: \"kubernetes.io/projected/9f986886-944f-4d57-9ffd-c4c8699d7062-kube-api-access-xvhl9\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186690 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186734 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/37f90217-3ea7-45a4-a9f2-a40cf11a677c-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186744 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b3e723c-fb54-400f-a61d-f3772e06753b-metrics-client-ca\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186839 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/37f90217-3ea7-45a4-a9f2-a40cf11a677c-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186881 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8svm\" (UniqueName: \"kubernetes.io/projected/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-api-access-t8svm\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186932 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.186980 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-root\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.187038 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9f986886-944f-4d57-9ffd-c4c8699d7062-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.187070 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ws7q\" (UniqueName: \"kubernetes.io/projected/0b3e723c-fb54-400f-a61d-f3772e06753b-kube-api-access-2ws7q\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.187713 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/37f90217-3ea7-45a4-a9f2-a40cf11a677c-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.187803 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.188229 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9f986886-944f-4d57-9ffd-c4c8699d7062-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.192443 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f986886-944f-4d57-9ffd-c4c8699d7062-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.192567 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.193664 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9f986886-944f-4d57-9ffd-c4c8699d7062-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.193939 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.206947 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8svm\" (UniqueName: \"kubernetes.io/projected/37f90217-3ea7-45a4-a9f2-a40cf11a677c-kube-api-access-t8svm\") pod \"kube-state-metrics-777cb5bd5d-qpf2t\" (UID: \"37f90217-3ea7-45a4-a9f2-a40cf11a677c\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.210931 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvhl9\" (UniqueName: \"kubernetes.io/projected/9f986886-944f-4d57-9ffd-c4c8699d7062-kube-api-access-xvhl9\") pod \"openshift-state-metrics-566fddb674-d5jxl\" (UID: \"9f986886-944f-4d57-9ffd-c4c8699d7062\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.288443 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-sys\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.288582 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-sys\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.288827 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-tls\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.288908 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-textfile\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.288995 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.289058 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-wtmp\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x" Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.289096 5024 
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.289124 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-root\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.289143 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ws7q\" (UniqueName: \"kubernetes.io/projected/0b3e723c-fb54-400f-a61d-f3772e06753b-kube-api-access-2ws7q\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.289650 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-root\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.289713 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-wtmp\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.290183 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b3e723c-fb54-400f-a61d-f3772e06753b-metrics-client-ca\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.290200 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-textfile\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.292362 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-tls\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.292806 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b3e723c-fb54-400f-a61d-f3772e06753b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.308608 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ws7q\" (UniqueName: \"kubernetes.io/projected/0b3e723c-fb54-400f-a61d-f3772e06753b-kube-api-access-2ws7q\") pod \"node-exporter-zpq4x\" (UID: \"0b3e723c-fb54-400f-a61d-f3772e06753b\") " pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.311015 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.323611 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.404802 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-zpq4x"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.564480 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.564540 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.564607 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.565336 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5cfa405463e6da44c10e5aaed39d084534cafde9adb70808f0b8a54ca8b0cfc"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.565391 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://a5cfa405463e6da44c10e5aaed39d084534cafde9adb70808f0b8a54ca8b0cfc" gracePeriod=600
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.590251 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zpq4x" event={"ID":"0b3e723c-fb54-400f-a61d-f3772e06753b","Type":"ContainerStarted","Data":"59b6f4ae44b13ac40f17eb34cfef61d7b032f9571dc605250ac943ff6d65b733"}
Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.758425 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t"]
Nov 28 17:05:37 crc kubenswrapper[5024]: W1128 17:05:37.767310 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f90217_3ea7_45a4_a9f2_a40cf11a677c.slice/crio-ec4c2c9e268673bcbce3a506ab49b91625ebc2561fe74a36f94db7eba5253d1f WatchSource:0}: Error finding container ec4c2c9e268673bcbce3a506ab49b91625ebc2561fe74a36f94db7eba5253d1f: Status 404 returned error can't find the container with id ec4c2c9e268673bcbce3a506ab49b91625ebc2561fe74a36f94db7eba5253d1f
with id ec4c2c9e268673bcbce3a506ab49b91625ebc2561fe74a36f94db7eba5253d1f Nov 28 17:05:37 crc kubenswrapper[5024]: I1128 17:05:37.955625 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl"] Nov 28 17:05:37 crc kubenswrapper[5024]: W1128 17:05:37.968694 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f986886_944f_4d57_9ffd_c4c8699d7062.slice/crio-99e99aaa9f7b25619e0bcb8fd62b311a07ab4277712d3e5ee300dfa1b9f87dee WatchSource:0}: Error finding container 99e99aaa9f7b25619e0bcb8fd62b311a07ab4277712d3e5ee300dfa1b9f87dee: Status 404 returned error can't find the container with id 99e99aaa9f7b25619e0bcb8fd62b311a07ab4277712d3e5ee300dfa1b9f87dee Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.129604 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.132322 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.136943 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-main-generated": failed to list *v1.Secret: secrets "alertmanager-main-generated" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.137006 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-main-generated\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"alertmanager-main-generated\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.137272 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-main-tls": failed to list *v1.Secret: secrets "alertmanager-main-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.137296 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-main-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"alertmanager-main-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.141963 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-main-web-config": failed to list *v1.Secret: secrets "alertmanager-main-web-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.142012 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"alertmanager-main-web-config\" is 
forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.142107 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy": failed to list *v1.Secret: secrets "alertmanager-kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.142122 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"alertmanager-kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.142155 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-main-dockercfg-fmclp": failed to list *v1.Secret: secrets "alertmanager-main-dockercfg-fmclp" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.142169 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-fmclp\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"alertmanager-main-dockercfg-fmclp\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.142200 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "alertmanager-trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.142211 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"alertmanager-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.142239 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-main-tls-assets-0": failed to list *v1.Secret: secrets "alertmanager-main-tls-assets-0" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.142248 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"alertmanager-main-tls-assets-0\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.142281 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric": failed to list *v1.Secret: secrets "alertmanager-kube-rbac-proxy-metric" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.142291 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"alertmanager-kube-rbac-proxy-metric\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: W1128 17:05:38.143629 5024 reflector.go:561] object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web": failed to list *v1.Secret: secrets "alertmanager-kube-rbac-proxy-web" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 28 17:05:38 crc kubenswrapper[5024]: E1128 17:05:38.143685 5024 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"alertmanager-kube-rbac-proxy-web\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.159996 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207437 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-config-volume\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207490 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-web-config\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207513 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207530 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207612 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207639 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djvtj\" (UniqueName: \"kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-kube-api-access-djvtj\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207664 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207688 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207720 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207741 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d057d20a-0080-46d1-9d04-adafbe6da44f-config-out\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207759 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.207784 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.310631 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.309893 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.311152 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djvtj\" (UniqueName: \"kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-kube-api-access-djvtj\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.311182 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.311675 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.311830 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.311960 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d057d20a-0080-46d1-9d04-adafbe6da44f-config-out\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.312096 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.312148 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.312207 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-config-volume\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.312245 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-web-config\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.312271 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.312297 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.313376 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.319582 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d057d20a-0080-46d1-9d04-adafbe6da44f-config-out\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.332308 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djvtj\" (UniqueName: \"kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-kube-api-access-djvtj\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.609557 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" event={"ID":"37f90217-3ea7-45a4-a9f2-a40cf11a677c","Type":"ContainerStarted","Data":"ec4c2c9e268673bcbce3a506ab49b91625ebc2561fe74a36f94db7eba5253d1f"} Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.612476 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" 
containerID="a5cfa405463e6da44c10e5aaed39d084534cafde9adb70808f0b8a54ca8b0cfc" exitCode=0 Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.612539 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"a5cfa405463e6da44c10e5aaed39d084534cafde9adb70808f0b8a54ca8b0cfc"} Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.612622 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"b2b8407cc3bf17902050626002a98c22963b96352f4dad4e0be00a881d87b638"} Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.612659 5024 scope.go:117] "RemoveContainer" containerID="3f488348a97e479a39411bf1785072f06b276b35227609da3651e95e4cc79ca3" Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.619097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" event={"ID":"9f986886-944f-4d57-9ffd-c4c8699d7062","Type":"ContainerStarted","Data":"a677bc6b4a93498ff0dacd64db03a4e57af3f3b9c59ae2ea26fb6c8c0f22d582"} Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.619157 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" event={"ID":"9f986886-944f-4d57-9ffd-c4c8699d7062","Type":"ContainerStarted","Data":"8f6ec5ef8003acd2ffff86e4ea2dc6d93cd04963c504ac1cd829f052c01a974c"} Nov 28 17:05:38 crc kubenswrapper[5024]: I1128 17:05:38.619173 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" event={"ID":"9f986886-944f-4d57-9ffd-c4c8699d7062","Type":"ContainerStarted","Data":"99e99aaa9f7b25619e0bcb8fd62b311a07ab4277712d3e5ee300dfa1b9f87dee"} Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.032890 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.045845 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-fmclp" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.050304 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.118393 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-9f858bf7b-mltgm"] Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.120535 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.124800 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.124913 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.125220 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-62126anhsclr5" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.125369 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.125505 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-5xpxp" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.129163 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.132171 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-9f858bf7b-mltgm"] Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.132980 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.234613 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-tls\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.234781 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.234940 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lfns\" (UniqueName: \"kubernetes.io/projected/affd52e4-2e9e-453b-bd05-b128058c012d-kube-api-access-9lfns\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.235136 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/affd52e4-2e9e-453b-bd05-b128058c012d-metrics-client-ca\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.235172 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.235205 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.235245 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.235406 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-grpc-tls\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.285783 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.300082 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312851 5024 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312872 5024 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312894 5024 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312851 5024 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312941 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy podName:d057d20a-0080-46d1-9d04-adafbe6da44f nodeName:}" failed. 
No retries permitted until 2025-11-28 17:05:39.812916684 +0000 UTC m=+441.861837589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "d057d20a-0080-46d1-9d04-adafbe6da44f") : failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312963 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-web-config podName:d057d20a-0080-46d1-9d04-adafbe6da44f nodeName:}" failed. No retries permitted until 2025-11-28 17:05:39.812948765 +0000 UTC m=+441.861869670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-web-config") pod "alertmanager-main-0" (UID: "d057d20a-0080-46d1-9d04-adafbe6da44f") : failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312980 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-tls-assets podName:d057d20a-0080-46d1-9d04-adafbe6da44f nodeName:}" failed. No retries permitted until 2025-11-28 17:05:39.812971616 +0000 UTC m=+441.861892521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-tls-assets") pod "alertmanager-main-0" (UID: "d057d20a-0080-46d1-9d04-adafbe6da44f") : failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.312993 5024 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-generated: failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.313013 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-config-volume podName:d057d20a-0080-46d1-9d04-adafbe6da44f nodeName:}" failed. No retries permitted until 2025-11-28 17:05:39.813008547 +0000 UTC m=+441.861929452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-config-volume") pod "alertmanager-main-0" (UID: "d057d20a-0080-46d1-9d04-adafbe6da44f") : failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.313067 5024 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.313123 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-web podName:d057d20a-0080-46d1-9d04-adafbe6da44f nodeName:}" failed. No retries permitted until 2025-11-28 17:05:39.813112791 +0000 UTC m=+441.862033696 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "d057d20a-0080-46d1-9d04-adafbe6da44f") : failed to sync secret cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.313176 5024 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: E1128 17:05:39.313213 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-trusted-ca-bundle podName:d057d20a-0080-46d1-9d04-adafbe6da44f nodeName:}" failed. No retries permitted until 2025-11-28 17:05:39.813197684 +0000 UTC m=+441.862118589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "d057d20a-0080-46d1-9d04-adafbe6da44f") : failed to sync configmap cache: timed out waiting for the condition Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.337973 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-grpc-tls\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.338141 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-tls\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.338209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.338248 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lfns\" (UniqueName: \"kubernetes.io/projected/affd52e4-2e9e-453b-bd05-b128058c012d-kube-api-access-9lfns\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.339804 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/affd52e4-2e9e-453b-bd05-b128058c012d-metrics-client-ca\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.339846 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.339880 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.339914 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.340855 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/affd52e4-2e9e-453b-bd05-b128058c012d-metrics-client-ca\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.342611 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.344512 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.344789 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.345782 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-tls\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.346381 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" 
(UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-grpc-tls\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.347637 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/affd52e4-2e9e-453b-bd05-b128058c012d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.364737 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lfns\" (UniqueName: \"kubernetes.io/projected/affd52e4-2e9e-453b-bd05-b128058c012d-kube-api-access-9lfns\") pod \"thanos-querier-9f858bf7b-mltgm\" (UID: \"affd52e4-2e9e-453b-bd05-b128058c012d\") " pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.373233 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.437952 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.441136 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.453138 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.488534 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.496400 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.613977 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.852735 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.852785 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.852819 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.852869 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-config-volume\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.852910 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-web-config\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.852931 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.854493 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d057d20a-0080-46d1-9d04-adafbe6da44f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.857677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d057d20a-0080-46d1-9d04-adafbe6da44f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.857911 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-web-config\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.858075 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-config-volume\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.858581 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.873395 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d057d20a-0080-46d1-9d04-adafbe6da44f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"d057d20a-0080-46d1-9d04-adafbe6da44f\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:39 crc kubenswrapper[5024]: I1128 17:05:39.951464 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 28 17:05:40 crc kubenswrapper[5024]: I1128 17:05:40.624679 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 28 17:05:40 crc kubenswrapper[5024]: I1128 17:05:40.639364 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" event={"ID":"9f986886-944f-4d57-9ffd-c4c8699d7062","Type":"ContainerStarted","Data":"bcd4a59033d3496511239f8127496cf80c99bfed421e7da905864f6efc40cff1"} Nov 28 17:05:40 crc kubenswrapper[5024]: I1128 17:05:40.641258 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zpq4x" event={"ID":"0b3e723c-fb54-400f-a61d-f3772e06753b","Type":"ContainerStarted","Data":"341a59ce695a6fb2aed0e6395ef825f579771e5875271e9a8997f4cf5b9422a1"} Nov 28 17:05:40 crc kubenswrapper[5024]: I1128 17:05:40.644038 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" event={"ID":"37f90217-3ea7-45a4-a9f2-a40cf11a677c","Type":"ContainerStarted","Data":"e129e3a60fc823aa242eab2fa8e58e7dc3557f9018a4a283ecb6c7e47241b70c"} Nov 28 17:05:40 crc kubenswrapper[5024]: I1128 17:05:40.644103 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" event={"ID":"37f90217-3ea7-45a4-a9f2-a40cf11a677c","Type":"ContainerStarted","Data":"c0e9366df877b210a0500beeca3c97e0a7c6d5915d622ab9a0cb85dce2449d07"} Nov 28 17:05:40 crc kubenswrapper[5024]: W1128 17:05:40.644569 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd057d20a_0080_46d1_9d04_adafbe6da44f.slice/crio-1ee3926b4b036fd014b02277300d9e416c459bd5deb457472f412771b0d56dd9 WatchSource:0}: Error finding container 1ee3926b4b036fd014b02277300d9e416c459bd5deb457472f412771b0d56dd9: Status 404 returned error can't find the container with id 1ee3926b4b036fd014b02277300d9e416c459bd5deb457472f412771b0d56dd9 Nov 28 17:05:40 crc kubenswrapper[5024]: I1128 17:05:40.674254 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d5jxl" podStartSLOduration=2.850001232 podStartE2EDuration="4.674221214s" podCreationTimestamp="2025-11-28 17:05:36 +0000 UTC" firstStartedPulling="2025-11-28 17:05:38.364705466 +0000 UTC m=+440.413626371" lastFinishedPulling="2025-11-28 17:05:40.188925448 +0000 UTC m=+442.237846353" observedRunningTime="2025-11-28 17:05:40.670408769 +0000 UTC m=+442.719329674" watchObservedRunningTime="2025-11-28 17:05:40.674221214 +0000 UTC m=+442.723142119" Nov 28 17:05:40 crc kubenswrapper[5024]: I1128 17:05:40.692897 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-9f858bf7b-mltgm"] Nov 28 17:05:40 crc kubenswrapper[5024]: W1128 17:05:40.713819 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffd52e4_2e9e_453b_bd05_b128058c012d.slice/crio-c2f0360d8efd99f937f38654862313b9cca6ca8d217d7434c4fefb187393d20e WatchSource:0}: Error finding container c2f0360d8efd99f937f38654862313b9cca6ca8d217d7434c4fefb187393d20e: Status 404 returned error can't find the container with id 
c2f0360d8efd99f937f38654862313b9cca6ca8d217d7434c4fefb187393d20e Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.657579 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" event={"ID":"affd52e4-2e9e-453b-bd05-b128058c012d","Type":"ContainerStarted","Data":"c2f0360d8efd99f937f38654862313b9cca6ca8d217d7434c4fefb187393d20e"} Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.660804 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerStarted","Data":"1ee3926b4b036fd014b02277300d9e416c459bd5deb457472f412771b0d56dd9"} Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.668051 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" event={"ID":"37f90217-3ea7-45a4-a9f2-a40cf11a677c","Type":"ContainerStarted","Data":"9f9e2f4fad03be80dfcdb930246e4b67347e8eb1e356481dd975c3c6199c9375"} Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.672483 5024 generic.go:334] "Generic (PLEG): container finished" podID="0b3e723c-fb54-400f-a61d-f3772e06753b" containerID="341a59ce695a6fb2aed0e6395ef825f579771e5875271e9a8997f4cf5b9422a1" exitCode=0 Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.673300 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zpq4x" event={"ID":"0b3e723c-fb54-400f-a61d-f3772e06753b","Type":"ContainerDied","Data":"341a59ce695a6fb2aed0e6395ef825f579771e5875271e9a8997f4cf5b9422a1"} Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.706786 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-qpf2t" podStartSLOduration=3.311708001 podStartE2EDuration="5.706756965s" podCreationTimestamp="2025-11-28 17:05:36 +0000 UTC" firstStartedPulling="2025-11-28 17:05:37.769281221 +0000 UTC m=+439.818202126" lastFinishedPulling="2025-11-28 17:05:40.164330185 +0000 UTC m=+442.213251090" observedRunningTime="2025-11-28 17:05:41.692842971 +0000 UTC m=+443.741763876" watchObservedRunningTime="2025-11-28 17:05:41.706756965 +0000 UTC m=+443.755677870" Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.838805 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66c748cb5d-x6cf7"] Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.842690 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:41 crc kubenswrapper[5024]: I1128 17:05:41.858593 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c748cb5d-x6cf7"] Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.006012 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twfh\" (UniqueName: \"kubernetes.io/projected/8468ec1f-9c45-41af-a290-ebdf83f0edf2-kube-api-access-5twfh\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.006248 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-oauth-serving-cert\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.006368 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-service-ca\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.006476 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-trusted-ca-bundle\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.006556 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-config\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.006611 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-oauth-config\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.006701 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-serving-cert\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.108291 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-trusted-ca-bundle\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc 
kubenswrapper[5024]: I1128 17:05:42.108360 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-config\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.108386 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-oauth-config\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.108416 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-serving-cert\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.108537 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twfh\" (UniqueName: \"kubernetes.io/projected/8468ec1f-9c45-41af-a290-ebdf83f0edf2-kube-api-access-5twfh\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.108583 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-oauth-serving-cert\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.108612 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-service-ca\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.109657 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-trusted-ca-bundle\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.109981 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-config\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.109992 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-oauth-serving-cert\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 
17:05:42.111220 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-service-ca\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.121661 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-oauth-config\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.122165 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-serving-cert\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.126574 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twfh\" (UniqueName: \"kubernetes.io/projected/8468ec1f-9c45-41af-a290-ebdf83f0edf2-kube-api-access-5twfh\") pod \"console-66c748cb5d-x6cf7\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") " pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.168354 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.385839 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-8d7475878-mpc2r"] Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.387416 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.391284 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.391387 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jr69l" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.391424 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.391601 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.391906 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-e8u5oefajvik5" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.391967 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.407664 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-8d7475878-mpc2r"] Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.517134 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea58cb44-e4b3-4b74-9588-3ce98fd877be-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.518123 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fk9c\" (UniqueName: \"kubernetes.io/projected/ea58cb44-e4b3-4b74-9588-3ce98fd877be-kube-api-access-7fk9c\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.518168 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-secret-metrics-client-certs\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.518190 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ea58cb44-e4b3-4b74-9588-3ce98fd877be-audit-log\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.518232 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-secret-metrics-server-tls\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" 
Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.518271 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ea58cb44-e4b3-4b74-9588-3ce98fd877be-metrics-server-audit-profiles\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.518296 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-client-ca-bundle\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.620424 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-secret-metrics-client-certs\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.620502 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ea58cb44-e4b3-4b74-9588-3ce98fd877be-audit-log\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.620536 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-secret-metrics-server-tls\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.620586 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ea58cb44-e4b3-4b74-9588-3ce98fd877be-metrics-server-audit-profiles\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.620613 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-client-ca-bundle\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.620690 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea58cb44-e4b3-4b74-9588-3ce98fd877be-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.620762 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7fk9c\" (UniqueName: \"kubernetes.io/projected/ea58cb44-e4b3-4b74-9588-3ce98fd877be-kube-api-access-7fk9c\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.621813 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ea58cb44-e4b3-4b74-9588-3ce98fd877be-audit-log\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.622334 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea58cb44-e4b3-4b74-9588-3ce98fd877be-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.622838 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ea58cb44-e4b3-4b74-9588-3ce98fd877be-metrics-server-audit-profiles\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.626781 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-secret-metrics-client-certs\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.627363 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-secret-metrics-server-tls\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.628563 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea58cb44-e4b3-4b74-9588-3ce98fd877be-client-ca-bundle\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.641358 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fk9c\" (UniqueName: \"kubernetes.io/projected/ea58cb44-e4b3-4b74-9588-3ce98fd877be-kube-api-access-7fk9c\") pod \"metrics-server-8d7475878-mpc2r\" (UID: \"ea58cb44-e4b3-4b74-9588-3ce98fd877be\") " pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.715193 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.731751 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zpq4x" event={"ID":"0b3e723c-fb54-400f-a61d-f3772e06753b","Type":"ContainerStarted","Data":"0ff06b39afa4a03916c5456ec6af6c971ae45b0453f587a77a1dc57439f3b27b"} Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.731797 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zpq4x" event={"ID":"0b3e723c-fb54-400f-a61d-f3772e06753b","Type":"ContainerStarted","Data":"d38cf01dd456b09b6222bf7372abcad781ce896d524fe4acbad731d70ed64a47"} Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.742114 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c748cb5d-x6cf7"] Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.763949 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-zpq4x" podStartSLOduration=3.019887458 podStartE2EDuration="5.76392137s" podCreationTimestamp="2025-11-28 17:05:37 +0000 UTC" firstStartedPulling="2025-11-28 17:05:37.449849442 +0000 UTC m=+439.498770347" lastFinishedPulling="2025-11-28 17:05:40.193883354 +0000 UTC m=+442.242804259" observedRunningTime="2025-11-28 17:05:42.76363282 +0000 UTC m=+444.812553745" watchObservedRunningTime="2025-11-28 17:05:42.76392137 +0000 UTC m=+444.812842275" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.786253 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx"] Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.787271 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.790705 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.790997 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.806637 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx"] Nov 28 17:05:42 crc kubenswrapper[5024]: I1128 17:05:42.949807 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ff941f00-d92a-4f1a-91e9-04d21ceed7fc-monitoring-plugin-cert\") pod \"monitoring-plugin-556bf88c56-s8pwx\" (UID: \"ff941f00-d92a-4f1a-91e9-04d21ceed7fc\") " pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.052288 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ff941f00-d92a-4f1a-91e9-04d21ceed7fc-monitoring-plugin-cert\") pod \"monitoring-plugin-556bf88c56-s8pwx\" (UID: \"ff941f00-d92a-4f1a-91e9-04d21ceed7fc\") " pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.062937 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/ff941f00-d92a-4f1a-91e9-04d21ceed7fc-monitoring-plugin-cert\") pod \"monitoring-plugin-556bf88c56-s8pwx\" (UID: \"ff941f00-d92a-4f1a-91e9-04d21ceed7fc\") " pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.119009 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.678043 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.680705 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.692265 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.692472 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-zdbd4" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.692539 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.692633 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-40i3eap1erq28" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.692757 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.692945 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.693004 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.696361 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.696430 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.696669 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.710634 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.712593 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.764832 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.764892 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.764917 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.764940 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.764974 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67f5d\" (UniqueName: \"kubernetes.io/projected/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-kube-api-access-67f5d\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.764998 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765035 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765063 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-config-out\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765087 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765150 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765176 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765223 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765249 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765271 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-config\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765296 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765345 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-web-config\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.765371 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.769063 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.782448 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866486 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866547 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866571 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866597 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67f5d\" (UniqueName: \"kubernetes.io/projected/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-kube-api-access-67f5d\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866643 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-config-out\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866685 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866705 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866726 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866747 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866774 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866796 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866814 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-config\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866872 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-web-config\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866895 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.866925 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.868937 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.869979 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.872306 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.874731 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.875159 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.877702 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.878828 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.879158 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-config\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.880557 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.881772 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.893057 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67f5d\" (UniqueName: 
\"kubernetes.io/projected/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-kube-api-access-67f5d\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.973668 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.974037 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.974288 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-config-out\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.975438 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.975956 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.978992 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:43 crc kubenswrapper[5024]: I1128 17:05:43.980599 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9-web-config\") pod \"prometheus-k8s-0\" (UID: \"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:44 crc kubenswrapper[5024]: I1128 17:05:44.014711 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:05:45 crc kubenswrapper[5024]: W1128 17:05:45.079676 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8468ec1f_9c45_41af_a290_ebdf83f0edf2.slice/crio-7e77c77e04788c1fa4680a1d29497bb3c07406425c813765c741458c86d8bd35 WatchSource:0}: Error finding container 7e77c77e04788c1fa4680a1d29497bb3c07406425c813765c741458c86d8bd35: Status 404 returned error can't find the container with id 7e77c77e04788c1fa4680a1d29497bb3c07406425c813765c741458c86d8bd35 Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.642798 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-8d7475878-mpc2r"] Nov 28 17:05:45 crc kubenswrapper[5024]: W1128 17:05:45.730297 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea58cb44_e4b3_4b74_9588_3ce98fd877be.slice/crio-d47585d8985faf816b4884915eadd541b3ff817b77db181585881057b1c805eb WatchSource:0}: Error finding container d47585d8985faf816b4884915eadd541b3ff817b77db181585881057b1c805eb: Status 404 returned error can't find the container with id d47585d8985faf816b4884915eadd541b3ff817b77db181585881057b1c805eb Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.757421 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx"] Nov 28 17:05:45 crc kubenswrapper[5024]: W1128 17:05:45.765008 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff941f00_d92a_4f1a_91e9_04d21ceed7fc.slice/crio-b0ab0a243d4558c49b0015beb9591120dd381051f05e96b47de97d238c90ea4e WatchSource:0}: Error finding container b0ab0a243d4558c49b0015beb9591120dd381051f05e96b47de97d238c90ea4e: Status 404 returned error can't find the container with id b0ab0a243d4558c49b0015beb9591120dd381051f05e96b47de97d238c90ea4e Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.786251 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" event={"ID":"affd52e4-2e9e-453b-bd05-b128058c012d","Type":"ContainerStarted","Data":"be8aee44b634ddf653e293e00d83becc9547d3e45d0ecb6a18d7de01d26efff3"} Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.787726 5024 generic.go:334] "Generic (PLEG): container finished" podID="d057d20a-0080-46d1-9d04-adafbe6da44f" containerID="98007b82e441ac0a65d3328d579fdf27b4a64fc133058a839e8db9128830fd01" exitCode=0 Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.787793 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerDied","Data":"98007b82e441ac0a65d3328d579fdf27b4a64fc133058a839e8db9128830fd01"} Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.788483 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.790516 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" event={"ID":"ea58cb44-e4b3-4b74-9588-3ce98fd877be","Type":"ContainerStarted","Data":"d47585d8985faf816b4884915eadd541b3ff817b77db181585881057b1c805eb"} Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.792502 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" event={"ID":"ff941f00-d92a-4f1a-91e9-04d21ceed7fc","Type":"ContainerStarted","Data":"b0ab0a243d4558c49b0015beb9591120dd381051f05e96b47de97d238c90ea4e"} Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.796374 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c748cb5d-x6cf7" event={"ID":"8468ec1f-9c45-41af-a290-ebdf83f0edf2","Type":"ContainerStarted","Data":"c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9"} Nov 28 17:05:45 crc kubenswrapper[5024]: I1128 17:05:45.796402 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c748cb5d-x6cf7" event={"ID":"8468ec1f-9c45-41af-a290-ebdf83f0edf2","Type":"ContainerStarted","Data":"7e77c77e04788c1fa4680a1d29497bb3c07406425c813765c741458c86d8bd35"} Nov 28 17:05:45 crc kubenswrapper[5024]: W1128 17:05:45.798409 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a8d3f09_df46_49ec_a965_ea6a8d1f2aa9.slice/crio-3a7ba15e8677a9f5c3137d24226f0143b62105ec7bcd9d24f4d192ffb1eb2b34 WatchSource:0}: Error finding container 3a7ba15e8677a9f5c3137d24226f0143b62105ec7bcd9d24f4d192ffb1eb2b34: Status 404 returned error can't find the container with id 3a7ba15e8677a9f5c3137d24226f0143b62105ec7bcd9d24f4d192ffb1eb2b34 Nov 28 17:05:46 crc kubenswrapper[5024]: I1128 17:05:46.814125 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" event={"ID":"affd52e4-2e9e-453b-bd05-b128058c012d","Type":"ContainerStarted","Data":"0ac2e2233dc192a6771c47060e22e2c289813e37c7a18ef02c5cb6ae483e4e40"} Nov 28 17:05:46 crc kubenswrapper[5024]: I1128 17:05:46.814524 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" event={"ID":"affd52e4-2e9e-453b-bd05-b128058c012d","Type":"ContainerStarted","Data":"8047c60e93a2d00f56f383d359feeaf30363e861dada3151dacacad1807284b1"} Nov 28 17:05:46 crc kubenswrapper[5024]: I1128 17:05:46.817048 5024 generic.go:334] "Generic (PLEG): container finished" podID="4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9" containerID="1542683081dc353fb09b575d570983ff5834f1043597ad84283044f2d68b5d24" exitCode=0 Nov 28 17:05:46 crc kubenswrapper[5024]: I1128 17:05:46.817146 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerDied","Data":"1542683081dc353fb09b575d570983ff5834f1043597ad84283044f2d68b5d24"} Nov 28 17:05:46 crc kubenswrapper[5024]: I1128 17:05:46.817186 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerStarted","Data":"3a7ba15e8677a9f5c3137d24226f0143b62105ec7bcd9d24f4d192ffb1eb2b34"} Nov 28 17:05:46 crc kubenswrapper[5024]: I1128 17:05:46.858487 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66c748cb5d-x6cf7" podStartSLOduration=5.858461758 podStartE2EDuration="5.858461758s" podCreationTimestamp="2025-11-28 17:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:05:45.851516366 +0000 UTC m=+447.900437271" watchObservedRunningTime="2025-11-28 17:05:46.858461758 +0000 UTC m=+448.907382663" Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 
17:05:52.170157 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66c748cb5d-x6cf7"
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.170752 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66c748cb5d-x6cf7"
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.176852 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66c748cb5d-x6cf7"
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.871854 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerStarted","Data":"412f29c3ccdef62504f7a8170d195589e27ede4e0a10d87846b0c4d65d4fe8b4"}
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.873747 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" event={"ID":"ea58cb44-e4b3-4b74-9588-3ce98fd877be","Type":"ContainerStarted","Data":"a0b69c4e46b2dbcd6ea9a5b02cc06ad0d3c0a85b05732868fa25748d241efaf2"}
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.874951 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" event={"ID":"ff941f00-d92a-4f1a-91e9-04d21ceed7fc","Type":"ContainerStarted","Data":"31cbf88a5489cd2f5f7822065409763a982f08af860ac3e0ba5002ca70fb7430"}
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.876395 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx"
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.880311 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" event={"ID":"affd52e4-2e9e-453b-bd05-b128058c012d","Type":"ContainerStarted","Data":"ea2f59453bd14e1ccd862c6a65e05d27052c183f5ac2b2c3a328c06046936547"}
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.883878 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx"
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.884803 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66c748cb5d-x6cf7"
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.895294 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-556bf88c56-s8pwx" podStartSLOduration=4.448735565 podStartE2EDuration="10.895270871s" podCreationTimestamp="2025-11-28 17:05:42 +0000 UTC" firstStartedPulling="2025-11-28 17:05:45.768100035 +0000 UTC m=+447.817020930" lastFinishedPulling="2025-11-28 17:05:52.214635231 +0000 UTC m=+454.263556236" observedRunningTime="2025-11-28 17:05:52.894882827 +0000 UTC m=+454.943803732" watchObservedRunningTime="2025-11-28 17:05:52.895270871 +0000 UTC m=+454.944191776"
Nov 28 17:05:52 crc kubenswrapper[5024]: I1128 17:05:52.971849 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r7n7g"]
Nov 28 17:05:53 crc kubenswrapper[5024]: I1128 17:05:53.890190 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" event={"ID":"affd52e4-2e9e-453b-bd05-b128058c012d","Type":"ContainerStarted","Data":"e8b7544d358532e4bcdc05d645bdca15977579c5c2ad2789d998c3c8f2bb7434"}
Nov 28 17:05:53 crc kubenswrapper[5024]: I1128 17:05:53.892798 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerStarted","Data":"9994a2e99cdd0868dc7dbbcda4cc639fbc0dbc270ad1eb4553a62226da0e4fa8"}
Nov 28 17:05:53 crc kubenswrapper[5024]: I1128 17:05:53.924261 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" podStartSLOduration=5.449257099 podStartE2EDuration="11.924233194s" podCreationTimestamp="2025-11-28 17:05:42 +0000 UTC" firstStartedPulling="2025-11-28 17:05:45.733180506 +0000 UTC m=+447.782101411" lastFinishedPulling="2025-11-28 17:05:52.208156601 +0000 UTC m=+454.257077506" observedRunningTime="2025-11-28 17:05:53.920711079 +0000 UTC m=+455.969631984" watchObservedRunningTime="2025-11-28 17:05:53.924233194 +0000 UTC m=+455.973154099"
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.930310 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerStarted","Data":"817dd0631d0e436e46b06039718538ca22392bab8bb28aabe95c5b7392b2dfa5"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.931266 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerStarted","Data":"44a0f7634b790bc2e554449d03e6aecd90d1b357544c0a8192adec2f48a0033d"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.931287 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerStarted","Data":"55d857e3e1d33850beda0c8c921d602ec1d005154c1eda4f2f3cbcf2ae441a92"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.931303 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerStarted","Data":"51160da1a1e322ae97fddae8d1ab3014c4a98fe336fd5d8d4441c502e622e35d"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.931314 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerStarted","Data":"871eeb88901786ce2c25edff737fd006b84547003e999a5077a2716b2aba4d2d"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.931327 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4a8d3f09-df46-49ec-a965-ea6a8d1f2aa9","Type":"ContainerStarted","Data":"af1dfbd6f7d53650ea6adfe0321528c2764f832f947cbcc3ecd184fc7762aff5"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.936398 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" event={"ID":"affd52e4-2e9e-453b-bd05-b128058c012d","Type":"ContainerStarted","Data":"ed7e9985ec5556b4382d69dbe623a8702e47e6b7a28b6b68ca1bdf31e831156a"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.936634 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm"
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.942482 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerStarted","Data":"b4ee51b240653bf57b35ae3b997eb5d6b1856c5d163ea40f7dd35dc838e934fc"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.942538 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerStarted","Data":"bef8dfe4d5d10912f6420379f1d615116ead34cdd936c3e433cbb86f1621d907"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.942552 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerStarted","Data":"2eab9581d65eaaeca66fbcf8871200b46e288fa5e3d8f825f4644275d09c1fee"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.942563 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"d057d20a-0080-46d1-9d04-adafbe6da44f","Type":"ContainerStarted","Data":"1dc0d910e715ddc11323b10d9f22ae5cd5da0357839eccffca909b5754ce98b3"}
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.948145 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm"
Nov 28 17:05:56 crc kubenswrapper[5024]: I1128 17:05:56.974558 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.063484693 podStartE2EDuration="13.974532327s" podCreationTimestamp="2025-11-28 17:05:43 +0000 UTC" firstStartedPulling="2025-11-28 17:05:46.819092491 +0000 UTC m=+448.868013396" lastFinishedPulling="2025-11-28 17:05:55.730140125 +0000 UTC m=+457.779061030" observedRunningTime="2025-11-28 17:05:56.966397096 +0000 UTC m=+459.015318011" watchObservedRunningTime="2025-11-28 17:05:56.974532327 +0000 UTC m=+459.023453222"
Nov 28 17:05:57 crc kubenswrapper[5024]: I1128 17:05:57.008508 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=7.45077801 podStartE2EDuration="19.00847331s" podCreationTimestamp="2025-11-28 17:05:38 +0000 UTC" firstStartedPulling="2025-11-28 17:05:40.649687223 +0000 UTC m=+442.698608118" lastFinishedPulling="2025-11-28 17:05:52.207382513 +0000 UTC m=+454.256303418" observedRunningTime="2025-11-28 17:05:57.000450352 +0000 UTC m=+459.049371277" watchObservedRunningTime="2025-11-28 17:05:57.00847331 +0000 UTC m=+459.057394235"
Nov 28 17:05:57 crc kubenswrapper[5024]: I1128 17:05:57.046813 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-9f858bf7b-mltgm" podStartSLOduration=6.555724411 podStartE2EDuration="18.046785476s" podCreationTimestamp="2025-11-28 17:05:39 +0000 UTC" firstStartedPulling="2025-11-28 17:05:40.716192054 +0000 UTC m=+442.765112959" lastFinishedPulling="2025-11-28 17:05:52.207253119 +0000 UTC m=+454.256174024" observedRunningTime="2025-11-28 17:05:57.043602686 +0000 UTC m=+459.092523591" watchObservedRunningTime="2025-11-28 17:05:57.046785476 +0000 UTC m=+459.095706381"
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.193579 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" podUID="bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" containerName="registry" containerID="cri-o://93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1" gracePeriod=30
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.654713 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb"
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.717901 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-tls\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.717963 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwp6\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-kube-api-access-7rwp6\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.718010 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-ca-trust-extracted\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.718046 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-bound-sa-token\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.718081 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-trusted-ca\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.718108 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-certificates\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.718127 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-installation-pull-secrets\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.718311 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\" (UID: \"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931\") "
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.719494 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.719555 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.731750 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.733296 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.735413 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-kube-api-access-7rwp6" (OuterVolumeSpecName: "kube-api-access-7rwp6") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "kube-api-access-7rwp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.736554 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.737400 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.739891 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" (UID: "bd1a93a9-0f58-4e15-90ec-2fb56e8f4931"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.819610 5024 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-tls\") on node \"crc\" DevicePath \"\""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.819661 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwp6\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-kube-api-access-7rwp6\") on node \"crc\" DevicePath \"\""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.819675 5024 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.819685 5024 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.819693 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.819701 5024 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.819713 5024 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.958397 5024 generic.go:334] "Generic (PLEG): container finished" podID="bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" containerID="93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1" exitCode=0
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.958514 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" event={"ID":"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931","Type":"ContainerDied","Data":"93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1"}
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.958656 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb" event={"ID":"bd1a93a9-0f58-4e15-90ec-2fb56e8f4931","Type":"ContainerDied","Data":"21268923ae5624dfbd4279b8a8cf2458b2301fafd025cc1aec18153eeecc507c"}
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.958552 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n4vqb"
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.958687 5024 scope.go:117] "RemoveContainer" containerID="93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1"
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.997923 5024 scope.go:117] "RemoveContainer" containerID="93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1"
Nov 28 17:05:58 crc kubenswrapper[5024]: E1128 17:05:58.998611 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1\": container with ID starting with 93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1 not found: ID does not exist" containerID="93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1"
Nov 28 17:05:58 crc kubenswrapper[5024]: I1128 17:05:58.998741 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1"} err="failed to get container status \"93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1\": rpc error: code = NotFound desc = could not find container \"93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1\": container with ID starting with 93d5395ae0a021e47f82b74f0c3b62f9e3ea6ddc08a8fce0d936a17c591fbcc1 not found: ID does not exist"
Nov 28 17:05:59 crc kubenswrapper[5024]: I1128 17:05:59.013632 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n4vqb"]
Nov 28 17:05:59 crc kubenswrapper[5024]: I1128 17:05:59.014923 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Nov 28 17:05:59 crc kubenswrapper[5024]: I1128 17:05:59.019272 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n4vqb"]
Nov 28 17:06:00 crc kubenswrapper[5024]: I1128 17:06:00.509422 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" path="/var/lib/kubelet/pods/bd1a93a9-0f58-4e15-90ec-2fb56e8f4931/volumes"
Nov 28 17:06:02 crc kubenswrapper[5024]: I1128 17:06:02.716188 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r"
Nov 28 17:06:02 crc kubenswrapper[5024]: I1128 17:06:02.716760 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r"
Nov 28 17:06:18 crc kubenswrapper[5024]: I1128 17:06:18.025167 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-r7n7g" podUID="f84f4343-2000-4b50-9650-22953ca7d39d" containerName="console" containerID="cri-o://aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b" gracePeriod=15
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.127855 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r7n7g_f84f4343-2000-4b50-9650-22953ca7d39d/console/0.log"
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.128506 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r7n7g"
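[Editor's note] The NotFound pair above (a successful "RemoveContainer" followed by a failed "ContainerStatus" lookup, "ID does not exist") is benign: the container was already gone by the time the status query ran, the kubelet logged the error and moved on. Cleanup paths stay idempotent by treating NotFound as success. A minimal sketch of that convention, assuming a gRPC-backed runtime client; ignoreNotFound is an invented helper, not a kubelet function:

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // ignoreNotFound treats "already deleted" as success, which is what
    // makes retried container removal safe: the second attempt above found
    // nothing left to delete, and that is fine.
    func ignoreNotFound(err error) error {
        if status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }

    func main() {
        gone := status.Error(codes.NotFound, "could not find container")
        fmt.Println(ignoreNotFound(gone))               // <nil>
        fmt.Println(ignoreNotFound(errors.New("boom"))) // boom
    }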
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.129603 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r7n7g_f84f4343-2000-4b50-9650-22953ca7d39d/console/0.log"
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.129642 5024 generic.go:334] "Generic (PLEG): container finished" podID="f84f4343-2000-4b50-9650-22953ca7d39d" containerID="aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b" exitCode=2
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.129684 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r7n7g" event={"ID":"f84f4343-2000-4b50-9650-22953ca7d39d","Type":"ContainerDied","Data":"aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b"}
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.129715 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r7n7g" event={"ID":"f84f4343-2000-4b50-9650-22953ca7d39d","Type":"ContainerDied","Data":"13c5d1c28c1b581cee4ad83a822bc148d031c8d47edb71640e191476415de622"}
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.129734 5024 scope.go:117] "RemoveContainer" containerID="aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b"
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.131953 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-console-config\") pod \"f84f4343-2000-4b50-9650-22953ca7d39d\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") "
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.131994 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-oauth-config\") pod \"f84f4343-2000-4b50-9650-22953ca7d39d\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") "
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.132027 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-service-ca\") pod \"f84f4343-2000-4b50-9650-22953ca7d39d\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") "
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.132147 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-548v7\" (UniqueName: \"kubernetes.io/projected/f84f4343-2000-4b50-9650-22953ca7d39d-kube-api-access-548v7\") pod \"f84f4343-2000-4b50-9650-22953ca7d39d\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") "
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.133093 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-console-config" (OuterVolumeSpecName: "console-config") pod "f84f4343-2000-4b50-9650-22953ca7d39d" (UID: "f84f4343-2000-4b50-9650-22953ca7d39d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.133110 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-service-ca" (OuterVolumeSpecName: "service-ca") pod "f84f4343-2000-4b50-9650-22953ca7d39d" (UID: "f84f4343-2000-4b50-9650-22953ca7d39d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.139093 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f84f4343-2000-4b50-9650-22953ca7d39d" (UID: "f84f4343-2000-4b50-9650-22953ca7d39d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.142318 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84f4343-2000-4b50-9650-22953ca7d39d-kube-api-access-548v7" (OuterVolumeSpecName: "kube-api-access-548v7") pod "f84f4343-2000-4b50-9650-22953ca7d39d" (UID: "f84f4343-2000-4b50-9650-22953ca7d39d"). InnerVolumeSpecName "kube-api-access-548v7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.163739 5024 scope.go:117] "RemoveContainer" containerID="aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b"
Nov 28 17:06:22 crc kubenswrapper[5024]: E1128 17:06:22.168303 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b\": container with ID starting with aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b not found: ID does not exist" containerID="aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b"
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.168344 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b"} err="failed to get container status \"aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b\": rpc error: code = NotFound desc = could not find container \"aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b\": container with ID starting with aac6675b09e1b4304dbe8a88e039d6ac71a2dfcb278e02f73847b3eb433f567b not found: ID does not exist"
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.233897 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-oauth-serving-cert\") pod \"f84f4343-2000-4b50-9650-22953ca7d39d\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") "
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.233946 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-trusted-ca-bundle\") pod \"f84f4343-2000-4b50-9650-22953ca7d39d\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") "
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.233970 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-serving-cert\") pod \"f84f4343-2000-4b50-9650-22953ca7d39d\" (UID: \"f84f4343-2000-4b50-9650-22953ca7d39d\") "
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.234382 5024 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-console-config\") on node \"crc\" DevicePath \"\""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.234422 5024 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.234435 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-service-ca\") on node \"crc\" DevicePath \"\""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.234444 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-548v7\" (UniqueName: \"kubernetes.io/projected/f84f4343-2000-4b50-9650-22953ca7d39d-kube-api-access-548v7\") on node \"crc\" DevicePath \"\""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.234469 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f84f4343-2000-4b50-9650-22953ca7d39d" (UID: "f84f4343-2000-4b50-9650-22953ca7d39d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.234484 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f84f4343-2000-4b50-9650-22953ca7d39d" (UID: "f84f4343-2000-4b50-9650-22953ca7d39d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.240452 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f84f4343-2000-4b50-9650-22953ca7d39d" (UID: "f84f4343-2000-4b50-9650-22953ca7d39d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.336244 5024 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.336293 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f84f4343-2000-4b50-9650-22953ca7d39d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.336303 5024 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f84f4343-2000-4b50-9650-22953ca7d39d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.723134 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:06:22 crc kubenswrapper[5024]: I1128 17:06:22.728698 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-8d7475878-mpc2r" Nov 28 17:06:23 crc kubenswrapper[5024]: I1128 17:06:23.139745 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r7n7g" Nov 28 17:06:23 crc kubenswrapper[5024]: I1128 17:06:23.163702 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r7n7g"] Nov 28 17:06:23 crc kubenswrapper[5024]: I1128 17:06:23.167608 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-r7n7g"] Nov 28 17:06:24 crc kubenswrapper[5024]: I1128 17:06:24.506944 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f84f4343-2000-4b50-9650-22953ca7d39d" path="/var/lib/kubelet/pods/f84f4343-2000-4b50-9650-22953ca7d39d/volumes" Nov 28 17:06:44 crc kubenswrapper[5024]: I1128 17:06:44.015653 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:06:44 crc kubenswrapper[5024]: I1128 17:06:44.049396 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:06:44 crc kubenswrapper[5024]: I1128 17:06:44.412122 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.525732 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-684b966679-864s4"] Nov 28 17:07:00 crc kubenswrapper[5024]: E1128 17:07:00.526655 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" containerName="registry" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.526673 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" containerName="registry" Nov 28 17:07:00 crc kubenswrapper[5024]: E1128 17:07:00.526708 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84f4343-2000-4b50-9650-22953ca7d39d" containerName="console" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.526716 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84f4343-2000-4b50-9650-22953ca7d39d" containerName="console" Nov 28 17:07:00 crc 
kubenswrapper[5024]: I1128 17:07:00.526932 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84f4343-2000-4b50-9650-22953ca7d39d" containerName="console" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.526950 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1a93a9-0f58-4e15-90ec-2fb56e8f4931" containerName="registry" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.527599 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.543033 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-684b966679-864s4"] Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.686505 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-oauth-config\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.686775 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-config\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.686922 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-serving-cert\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.687011 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8rsk\" (UniqueName: \"kubernetes.io/projected/b95c512b-6c39-4c81-b89f-c76cfd89a185-kube-api-access-x8rsk\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.687127 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-service-ca\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.687294 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-trusted-ca-bundle\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.687370 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-oauth-serving-cert\") pod \"console-684b966679-864s4\" (UID: 
\"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.788727 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-oauth-config\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.788825 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-config\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.788847 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-serving-cert\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.788867 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8rsk\" (UniqueName: \"kubernetes.io/projected/b95c512b-6c39-4c81-b89f-c76cfd89a185-kube-api-access-x8rsk\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.788886 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-service-ca\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.788921 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-trusted-ca-bundle\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.788941 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-oauth-serving-cert\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.789906 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-oauth-serving-cert\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.790370 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-service-ca\") pod \"console-684b966679-864s4\" (UID: 
\"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.790950 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-trusted-ca-bundle\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.791095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-config\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.798845 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-oauth-config\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.798860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-serving-cert\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.815072 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8rsk\" (UniqueName: \"kubernetes.io/projected/b95c512b-6c39-4c81-b89f-c76cfd89a185-kube-api-access-x8rsk\") pod \"console-684b966679-864s4\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " pod="openshift-console/console-684b966679-864s4" Nov 28 17:07:00 crc kubenswrapper[5024]: I1128 17:07:00.848706 5024 util.go:30] "No sandbox for pod can be found. 
Nov 28 17:07:01 crc kubenswrapper[5024]: I1128 17:07:01.081584 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-684b966679-864s4"]
Nov 28 17:07:01 crc kubenswrapper[5024]: I1128 17:07:01.494619 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684b966679-864s4" event={"ID":"b95c512b-6c39-4c81-b89f-c76cfd89a185","Type":"ContainerStarted","Data":"9a0a5685d44563799666812ec21596c18f5de3e131987b32aaf09ecd08e632d3"}
Nov 28 17:07:02 crc kubenswrapper[5024]: I1128 17:07:02.506013 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684b966679-864s4" event={"ID":"b95c512b-6c39-4c81-b89f-c76cfd89a185","Type":"ContainerStarted","Data":"edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4"}
Nov 28 17:07:02 crc kubenswrapper[5024]: I1128 17:07:02.529299 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-684b966679-864s4" podStartSLOduration=2.529275357 podStartE2EDuration="2.529275357s" podCreationTimestamp="2025-11-28 17:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:07:02.520533847 +0000 UTC m=+524.569454752" watchObservedRunningTime="2025-11-28 17:07:02.529275357 +0000 UTC m=+524.578196262"
Nov 28 17:07:10 crc kubenswrapper[5024]: I1128 17:07:10.849345 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-684b966679-864s4"
Nov 28 17:07:10 crc kubenswrapper[5024]: I1128 17:07:10.850207 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-684b966679-864s4"
Nov 28 17:07:10 crc kubenswrapper[5024]: I1128 17:07:10.855039 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-684b966679-864s4"
Nov 28 17:07:11 crc kubenswrapper[5024]: I1128 17:07:11.580579 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-684b966679-864s4"
Nov 28 17:07:11 crc kubenswrapper[5024]: I1128 17:07:11.655867 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66c748cb5d-x6cf7"]
Nov 28 17:07:36 crc kubenswrapper[5024]: I1128 17:07:36.714551 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-66c748cb5d-x6cf7" podUID="8468ec1f-9c45-41af-a290-ebdf83f0edf2" containerName="console" containerID="cri-o://c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9" gracePeriod=15
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.111901 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66c748cb5d-x6cf7_8468ec1f-9c45-41af-a290-ebdf83f0edf2/console/0.log"
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.112250 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c748cb5d-x6cf7"
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.250560 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-oauth-config\") pod \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") "
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.250651 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-oauth-serving-cert\") pod \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") "
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.250731 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-service-ca\") pod \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") "
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.250783 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-config\") pod \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") "
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.250847 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-serving-cert\") pod \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") "
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.250880 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5twfh\" (UniqueName: \"kubernetes.io/projected/8468ec1f-9c45-41af-a290-ebdf83f0edf2-kube-api-access-5twfh\") pod \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") "
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.250946 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-trusted-ca-bundle\") pod \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\" (UID: \"8468ec1f-9c45-41af-a290-ebdf83f0edf2\") "
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.251905 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-config" (OuterVolumeSpecName: "console-config") pod "8468ec1f-9c45-41af-a290-ebdf83f0edf2" (UID: "8468ec1f-9c45-41af-a290-ebdf83f0edf2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.251937 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8468ec1f-9c45-41af-a290-ebdf83f0edf2" (UID: "8468ec1f-9c45-41af-a290-ebdf83f0edf2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.251933 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8468ec1f-9c45-41af-a290-ebdf83f0edf2" (UID: "8468ec1f-9c45-41af-a290-ebdf83f0edf2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.252425 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-service-ca" (OuterVolumeSpecName: "service-ca") pod "8468ec1f-9c45-41af-a290-ebdf83f0edf2" (UID: "8468ec1f-9c45-41af-a290-ebdf83f0edf2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.257185 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8468ec1f-9c45-41af-a290-ebdf83f0edf2" (UID: "8468ec1f-9c45-41af-a290-ebdf83f0edf2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.257233 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8468ec1f-9c45-41af-a290-ebdf83f0edf2-kube-api-access-5twfh" (OuterVolumeSpecName: "kube-api-access-5twfh") pod "8468ec1f-9c45-41af-a290-ebdf83f0edf2" (UID: "8468ec1f-9c45-41af-a290-ebdf83f0edf2"). InnerVolumeSpecName "kube-api-access-5twfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.258180 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8468ec1f-9c45-41af-a290-ebdf83f0edf2" (UID: "8468ec1f-9c45-41af-a290-ebdf83f0edf2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.352476 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.352510 5024 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.352527 5024 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.352539 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.352549 5024 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.352558 5024 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8468ec1f-9c45-41af-a290-ebdf83f0edf2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.352567 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5twfh\" (UniqueName: \"kubernetes.io/projected/8468ec1f-9c45-41af-a290-ebdf83f0edf2-kube-api-access-5twfh\") on node \"crc\" DevicePath \"\"" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.809876 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-66c748cb5d-x6cf7_8468ec1f-9c45-41af-a290-ebdf83f0edf2/console/0.log" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.809937 5024 generic.go:334] "Generic (PLEG): container finished" podID="8468ec1f-9c45-41af-a290-ebdf83f0edf2" containerID="c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9" exitCode=2 Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.809974 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c748cb5d-x6cf7" event={"ID":"8468ec1f-9c45-41af-a290-ebdf83f0edf2","Type":"ContainerDied","Data":"c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9"} Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.810008 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c748cb5d-x6cf7" event={"ID":"8468ec1f-9c45-41af-a290-ebdf83f0edf2","Type":"ContainerDied","Data":"7e77c77e04788c1fa4680a1d29497bb3c07406425c813765c741458c86d8bd35"} Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.810044 5024 scope.go:117] "RemoveContainer" containerID="c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.810087 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66c748cb5d-x6cf7" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.828874 5024 scope.go:117] "RemoveContainer" containerID="c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9" Nov 28 17:07:37 crc kubenswrapper[5024]: E1128 17:07:37.829344 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9\": container with ID starting with c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9 not found: ID does not exist" containerID="c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.829379 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9"} err="failed to get container status \"c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9\": rpc error: code = NotFound desc = could not find container \"c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9\": container with ID starting with c0b8f95a8b7364a4833734cf6906deef441e88d9d1fd513006f8546991b196a9 not found: ID does not exist" Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.842922 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-66c748cb5d-x6cf7"] Nov 28 17:07:37 crc kubenswrapper[5024]: I1128 17:07:37.846792 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-66c748cb5d-x6cf7"] Nov 28 17:07:38 crc kubenswrapper[5024]: I1128 17:07:38.506481 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8468ec1f-9c45-41af-a290-ebdf83f0edf2" path="/var/lib/kubelet/pods/8468ec1f-9c45-41af-a290-ebdf83f0edf2/volumes" Nov 28 17:08:07 crc kubenswrapper[5024]: I1128 17:08:07.565677 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:08:07 crc kubenswrapper[5024]: I1128 17:08:07.566511 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:08:37 crc kubenswrapper[5024]: I1128 17:08:37.565485 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:08:37 crc kubenswrapper[5024]: I1128 17:08:37.566588 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:09:07 crc kubenswrapper[5024]: I1128 17:09:07.565713 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:09:07 crc kubenswrapper[5024]: I1128 17:09:07.566592 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:09:07 crc kubenswrapper[5024]: I1128 17:09:07.566674 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:09:07 crc kubenswrapper[5024]: I1128 17:09:07.567721 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2b8407cc3bf17902050626002a98c22963b96352f4dad4e0be00a881d87b638"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:09:07 crc kubenswrapper[5024]: I1128 17:09:07.567835 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://b2b8407cc3bf17902050626002a98c22963b96352f4dad4e0be00a881d87b638" gracePeriod=600 Nov 28 17:09:08 crc kubenswrapper[5024]: I1128 17:09:08.467177 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="b2b8407cc3bf17902050626002a98c22963b96352f4dad4e0be00a881d87b638" exitCode=0 Nov 28 17:09:08 crc kubenswrapper[5024]: I1128 17:09:08.467292 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"b2b8407cc3bf17902050626002a98c22963b96352f4dad4e0be00a881d87b638"} Nov 28 17:09:08 crc kubenswrapper[5024]: I1128 17:09:08.468176 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"b519f9b78edbf9b228fc85037669f9ab174eddbe4b594ce06b779c1bf0c5cf3c"} Nov 28 17:09:08 crc kubenswrapper[5024]: I1128 17:09:08.468212 5024 scope.go:117] "RemoveContainer" containerID="a5cfa405463e6da44c10e5aaed39d084534cafde9adb70808f0b8a54ca8b0cfc" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.177139 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx"] Nov 28 17:11:00 crc kubenswrapper[5024]: E1128 17:11:00.178145 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8468ec1f-9c45-41af-a290-ebdf83f0edf2" containerName="console" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.178161 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8468ec1f-9c45-41af-a290-ebdf83f0edf2" containerName="console" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.178267 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8468ec1f-9c45-41af-a290-ebdf83f0edf2" containerName="console" Nov 28 17:11:00 crc kubenswrapper[5024]: 
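[Editor's note] The machine-config-daemon sequence above is textbook liveness-probe handling: failures are logged at 17:08:07, 17:08:37 and 17:09:07, 30 seconds apart, and the third one trips the restart, which is consistent with a 30-second period and a failure threshold of 3; the kubelet then kills the container with the pod's termination grace period (600 s here) and starts a replacement in place. A probe matching the logged endpoint, sketched with k8s.io/api/core/v1 types (the period and threshold are inferred from the timestamps above, not read from the actual DaemonSet):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Endpoint taken from the probe failure output above:
        // http://127.0.0.1:8798/health
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // inferred: failures land 30 s apart
            FailureThreshold: 3,  // inferred: third failure triggers the kill
        }
        fmt.Printf("%+v\n", probe.ProbeHandler.HTTPGet)
    }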
I1128 17:11:00.179332 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.182938 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.203719 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx"] Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.226515 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.226595 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9x8h\" (UniqueName: \"kubernetes.io/projected/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-kube-api-access-n9x8h\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.226657 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.328092 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.328540 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.329192 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9x8h\" (UniqueName: \"kubernetes.io/projected/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-kube-api-access-n9x8h\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.329115 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.328850 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.351069 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9x8h\" (UniqueName: \"kubernetes.io/projected/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-kube-api-access-n9x8h\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.506800 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:00 crc kubenswrapper[5024]: I1128 17:11:00.761047 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx"] Nov 28 17:11:01 crc kubenswrapper[5024]: I1128 17:11:01.403634 5024 generic.go:334] "Generic (PLEG): container finished" podID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerID="3e122e4775cc9230fd7164b72592159bacd60a5edc8e9c77b1b7ce4b166c496a" exitCode=0 Nov 28 17:11:01 crc kubenswrapper[5024]: I1128 17:11:01.403694 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" event={"ID":"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121","Type":"ContainerDied","Data":"3e122e4775cc9230fd7164b72592159bacd60a5edc8e9c77b1b7ce4b166c496a"} Nov 28 17:11:01 crc kubenswrapper[5024]: I1128 17:11:01.403733 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" event={"ID":"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121","Type":"ContainerStarted","Data":"a2f64e878d5509da31ea51dc332c951b7614fbfeb099b973c5dd7543a85dca37"} Nov 28 17:11:01 crc kubenswrapper[5024]: I1128 17:11:01.406403 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:11:03 crc kubenswrapper[5024]: I1128 17:11:03.422472 5024 generic.go:334] "Generic (PLEG): container finished" podID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerID="8d65b3da2fe0449bfb25884ebcbdf2edfd6293dea2ac9f57027414cbecd8e84c" exitCode=0 Nov 28 17:11:03 crc kubenswrapper[5024]: I1128 17:11:03.422557 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" event={"ID":"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121","Type":"ContainerDied","Data":"8d65b3da2fe0449bfb25884ebcbdf2edfd6293dea2ac9f57027414cbecd8e84c"} Nov 28 17:11:04 crc kubenswrapper[5024]: I1128 17:11:04.432969 5024 generic.go:334] "Generic (PLEG): container 
finished" podID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerID="148cc6caaf9fc92fc2947bb9defc56d47645b2802ebeff484e6493f341cb23a0" exitCode=0 Nov 28 17:11:04 crc kubenswrapper[5024]: I1128 17:11:04.433055 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" event={"ID":"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121","Type":"ContainerDied","Data":"148cc6caaf9fc92fc2947bb9defc56d47645b2802ebeff484e6493f341cb23a0"} Nov 28 17:11:05 crc kubenswrapper[5024]: I1128 17:11:05.721727 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:05 crc kubenswrapper[5024]: I1128 17:11:05.920482 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-util\") pod \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " Nov 28 17:11:05 crc kubenswrapper[5024]: I1128 17:11:05.920627 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-bundle\") pod \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " Nov 28 17:11:05 crc kubenswrapper[5024]: I1128 17:11:05.920662 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9x8h\" (UniqueName: \"kubernetes.io/projected/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-kube-api-access-n9x8h\") pod \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\" (UID: \"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121\") " Nov 28 17:11:05 crc kubenswrapper[5024]: I1128 17:11:05.923680 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-bundle" (OuterVolumeSpecName: "bundle") pod "d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" (UID: "d8b87fe5-2e8a-4f1c-9ca4-4732b192d121"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:11:05 crc kubenswrapper[5024]: I1128 17:11:05.927948 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-kube-api-access-n9x8h" (OuterVolumeSpecName: "kube-api-access-n9x8h") pod "d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" (UID: "d8b87fe5-2e8a-4f1c-9ca4-4732b192d121"). InnerVolumeSpecName "kube-api-access-n9x8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:11:05 crc kubenswrapper[5024]: I1128 17:11:05.936579 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-util" (OuterVolumeSpecName: "util") pod "d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" (UID: "d8b87fe5-2e8a-4f1c-9ca4-4732b192d121"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:11:06 crc kubenswrapper[5024]: I1128 17:11:06.023133 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-util\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:06 crc kubenswrapper[5024]: I1128 17:11:06.023191 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:06 crc kubenswrapper[5024]: I1128 17:11:06.023203 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9x8h\" (UniqueName: \"kubernetes.io/projected/d8b87fe5-2e8a-4f1c-9ca4-4732b192d121-kube-api-access-n9x8h\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:06 crc kubenswrapper[5024]: I1128 17:11:06.463722 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" event={"ID":"d8b87fe5-2e8a-4f1c-9ca4-4732b192d121","Type":"ContainerDied","Data":"a2f64e878d5509da31ea51dc332c951b7614fbfeb099b973c5dd7543a85dca37"} Nov 28 17:11:06 crc kubenswrapper[5024]: I1128 17:11:06.463786 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f64e878d5509da31ea51dc332c951b7614fbfeb099b973c5dd7543a85dca37" Nov 28 17:11:06 crc kubenswrapper[5024]: I1128 17:11:06.463843 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx" Nov 28 17:11:07 crc kubenswrapper[5024]: I1128 17:11:07.564680 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:11:07 crc kubenswrapper[5024]: I1128 17:11:07.565513 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.227581 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b2gbm"] Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.228598 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-controller" containerID="cri-o://eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.228693 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="nbdb" containerID="cri-o://5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.228744 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.228793 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-acl-logging" containerID="cri-o://778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.228776 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="northd" containerID="cri-o://649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.228834 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kube-rbac-proxy-node" containerID="cri-o://55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.228785 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="sbdb" containerID="cri-o://fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.256915 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" containerID="cri-o://36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" gracePeriod=30 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.499519 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovnkube-controller/3.log" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.502184 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovn-acl-logging/0.log" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.502740 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovn-controller/0.log" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503147 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" exitCode=0 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503171 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" exitCode=0 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503179 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" exitCode=143 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503186 5024 generic.go:334] "Generic (PLEG): container 
finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" exitCode=143 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503202 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd"} Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503236 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1"} Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503248 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8"} Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a"} Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.503276 5024 scope.go:117] "RemoveContainer" containerID="3035172001bc93fcffe16bca13eff1ab2b1f7787b508276f5ff358c509ad85dd" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.505315 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/2.log" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.505758 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/1.log" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.505797 5024 generic.go:334] "Generic (PLEG): container finished" podID="97cac632-c692-414d-b0cf-605f0bb7719b" containerID="3a37dfec474ed39a219775a09f2e6b802a00e45a060e671c988f1e68293d49df" exitCode=2 Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.505828 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerDied","Data":"3a37dfec474ed39a219775a09f2e6b802a00e45a060e671c988f1e68293d49df"} Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.506296 5024 scope.go:117] "RemoveContainer" containerID="3a37dfec474ed39a219775a09f2e6b802a00e45a060e671c988f1e68293d49df" Nov 28 17:11:11 crc kubenswrapper[5024]: I1128 17:11:11.543788 5024 scope.go:117] "RemoveContainer" containerID="fddcf1223db1eb698e609489771d1fd1fd040bb9f4df3b4d69e38e8f827ee2b6" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.430625 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovn-acl-logging/0.log" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.431220 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovn-controller/0.log" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.431715 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.624561 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovn-acl-logging/0.log" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.625198 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-b2gbm_5b1542ec-e582-404b-8649-4a2a3e6ac398/ovn-controller/0.log" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.634787 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" exitCode=0 Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.634842 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" exitCode=0 Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.634855 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" exitCode=0 Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.634866 5024 generic.go:334] "Generic (PLEG): container finished" podID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerID="55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" exitCode=0 Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.635038 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.635082 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d"} Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.635128 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10"} Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.635153 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654"} Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.635190 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323"} Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.635207 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b2gbm" event={"ID":"5b1542ec-e582-404b-8649-4a2a3e6ac398","Type":"ContainerDied","Data":"f4f82891a69ca3b29fdf2bf20318848ba35c6f27fca9f6352aaa055aaea660e0"} Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.635234 5024 scope.go:117] "RemoveContainer" containerID="36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" Nov 28 17:11:12 crc 
kubenswrapper[5024]: I1128 17:11:12.656278 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4vh86_97cac632-c692-414d-b0cf-605f0bb7719b/kube-multus/2.log" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.656831 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-ovn\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.656847 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4vh86" event={"ID":"97cac632-c692-414d-b0cf-605f0bb7719b","Type":"ContainerStarted","Data":"007e913fe301cbc4fae5c30505011f8ff354b56160e77716f1be40104643a55f"} Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.656959 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-systemd-units\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657021 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-script-lib\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657094 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-kubelet\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657126 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-slash\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657121 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657152 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-bin\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657249 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvvzd\" (UniqueName: \"kubernetes.io/projected/5b1542ec-e582-404b-8649-4a2a3e6ac398-kube-api-access-lvvzd\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657256 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657305 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-openvswitch\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657307 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657356 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-config\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657378 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-systemd\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657400 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-node-log\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657428 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-etc-openvswitch\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657452 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovn-node-metrics-cert\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657494 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-env-overrides\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657485 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657525 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-ovn-kubernetes\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657544 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-slash" (OuterVolumeSpecName: "host-slash") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657583 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-var-lib-openvswitch\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657644 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-netd\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657674 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-log-socket\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657706 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-netns\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.657731 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5b1542ec-e582-404b-8649-4a2a3e6ac398\" (UID: \"5b1542ec-e582-404b-8649-4a2a3e6ac398\") " Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.658209 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660389 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660477 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660514 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660561 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660578 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-log-socket" (OuterVolumeSpecName: "log-socket") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660596 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660625 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.660477 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-node-log" (OuterVolumeSpecName: "node-log") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661027 5024 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661069 5024 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661087 5024 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661099 5024 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661139 5024 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661180 5024 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661192 5024 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-node-log\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661203 5024 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661223 5024 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661235 5024 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661245 5024 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661259 5024 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.661270 5024 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc 
kubenswrapper[5024]: I1128 17:11:12.661282 5024 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.664871 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.665029 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.674577 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.688297 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1542ec-e582-404b-8649-4a2a3e6ac398-kube-api-access-lvvzd" (OuterVolumeSpecName: "kube-api-access-lvvzd") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "kube-api-access-lvvzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.688544 5024 scope.go:117] "RemoveContainer" containerID="fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.691626 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.710131 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5b1542ec-e582-404b-8649-4a2a3e6ac398" (UID: "5b1542ec-e582-404b-8649-4a2a3e6ac398"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.745298 5024 scope.go:117] "RemoveContainer" containerID="5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.763842 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.763884 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvvzd\" (UniqueName: \"kubernetes.io/projected/5b1542ec-e582-404b-8649-4a2a3e6ac398-kube-api-access-lvvzd\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.763898 5024 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.763910 5024 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b1542ec-e582-404b-8649-4a2a3e6ac398-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.763919 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b1542ec-e582-404b-8649-4a2a3e6ac398-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.763929 5024 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b1542ec-e582-404b-8649-4a2a3e6ac398-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772335 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l4xcf"] Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772622 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="nbdb" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772637 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="nbdb" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772648 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772654 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772664 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kube-rbac-proxy-node" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772671 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kube-rbac-proxy-node" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772683 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="sbdb" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772689 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="sbdb" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772696 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerName="extract" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772702 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerName="extract" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772715 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kubecfg-setup" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772720 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kubecfg-setup" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772733 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772741 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772749 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772755 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772763 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerName="util" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772769 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerName="util" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772777 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772783 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772792 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="northd" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772798 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="northd" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772806 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-acl-logging" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772811 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-acl-logging" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772822 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772828 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772838 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772845 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.772852 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerName="pull" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772858 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerName="pull" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.772986 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773002 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-acl-logging" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773013 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="sbdb" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773022 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovn-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773032 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="northd" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773042 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773054 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kube-rbac-proxy-node" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773081 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b87fe5-2e8a-4f1c-9ca4-4732b192d121" containerName="extract" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773090 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773097 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="nbdb" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773104 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.773201 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773208 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773308 5024 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.773323 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" containerName="ovnkube-controller" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.790628 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.799327 5024 scope.go:117] "RemoveContainer" containerID="649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.838662 5024 scope.go:117] "RemoveContainer" containerID="4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.860593 5024 scope.go:117] "RemoveContainer" containerID="55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.880695 5024 scope.go:117] "RemoveContainer" containerID="778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.895356 5024 scope.go:117] "RemoveContainer" containerID="eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.913432 5024 scope.go:117] "RemoveContainer" containerID="c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.934640 5024 scope.go:117] "RemoveContainer" containerID="36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.935327 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": container with ID starting with 36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd not found: ID does not exist" containerID="36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.935373 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd"} err="failed to get container status \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": rpc error: code = NotFound desc = could not find container \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": container with ID starting with 36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.935399 5024 scope.go:117] "RemoveContainer" containerID="fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.939515 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": container with ID starting with fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d not found: ID does not exist" containerID="fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.939593 
5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d"} err="failed to get container status \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": rpc error: code = NotFound desc = could not find container \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": container with ID starting with fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.939642 5024 scope.go:117] "RemoveContainer" containerID="5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.940115 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": container with ID starting with 5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1 not found: ID does not exist" containerID="5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.940296 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1"} err="failed to get container status \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": rpc error: code = NotFound desc = could not find container \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": container with ID starting with 5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.940334 5024 scope.go:117] "RemoveContainer" containerID="649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.940636 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": container with ID starting with 649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10 not found: ID does not exist" containerID="649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.940662 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10"} err="failed to get container status \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": rpc error: code = NotFound desc = could not find container \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": container with ID starting with 649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.940681 5024 scope.go:117] "RemoveContainer" containerID="4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.940953 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": container with ID starting with 4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654 not found: ID 
does not exist" containerID="4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.940988 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654"} err="failed to get container status \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": rpc error: code = NotFound desc = could not find container \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": container with ID starting with 4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.941006 5024 scope.go:117] "RemoveContainer" containerID="55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.941273 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": container with ID starting with 55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323 not found: ID does not exist" containerID="55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.941303 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323"} err="failed to get container status \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": rpc error: code = NotFound desc = could not find container \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": container with ID starting with 55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.941325 5024 scope.go:117] "RemoveContainer" containerID="778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.941571 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": container with ID starting with 778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8 not found: ID does not exist" containerID="778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.941594 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8"} err="failed to get container status \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": rpc error: code = NotFound desc = could not find container \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": container with ID starting with 778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.941609 5024 scope.go:117] "RemoveContainer" containerID="eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.941815 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": container with ID starting with eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a not found: ID does not exist" containerID="eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.941855 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a"} err="failed to get container status \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": rpc error: code = NotFound desc = could not find container \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": container with ID starting with eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.941873 5024 scope.go:117] "RemoveContainer" containerID="c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78" Nov 28 17:11:12 crc kubenswrapper[5024]: E1128 17:11:12.942130 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": container with ID starting with c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78 not found: ID does not exist" containerID="c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.942153 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78"} err="failed to get container status \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": rpc error: code = NotFound desc = could not find container \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": container with ID starting with c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.942190 5024 scope.go:117] "RemoveContainer" containerID="36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.942430 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd"} err="failed to get container status \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": rpc error: code = NotFound desc = could not find container \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": container with ID starting with 36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.942472 5024 scope.go:117] "RemoveContainer" containerID="fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.942754 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d"} err="failed to get container status \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": rpc error: code = NotFound desc = could not find container \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": container with ID starting with 
fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.942773 5024 scope.go:117] "RemoveContainer" containerID="5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.945398 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1"} err="failed to get container status \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": rpc error: code = NotFound desc = could not find container \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": container with ID starting with 5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.945454 5024 scope.go:117] "RemoveContainer" containerID="649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.945896 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10"} err="failed to get container status \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": rpc error: code = NotFound desc = could not find container \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": container with ID starting with 649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.945928 5024 scope.go:117] "RemoveContainer" containerID="4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.946426 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654"} err="failed to get container status \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": rpc error: code = NotFound desc = could not find container \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": container with ID starting with 4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.946451 5024 scope.go:117] "RemoveContainer" containerID="55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.946700 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323"} err="failed to get container status \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": rpc error: code = NotFound desc = could not find container \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": container with ID starting with 55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.946722 5024 scope.go:117] "RemoveContainer" containerID="778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.948285 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8"} err="failed to get container status \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": rpc error: code = NotFound desc = could not find container \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": container with ID starting with 778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.948317 5024 scope.go:117] "RemoveContainer" containerID="eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.949054 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a"} err="failed to get container status \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": rpc error: code = NotFound desc = could not find container \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": container with ID starting with eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.949078 5024 scope.go:117] "RemoveContainer" containerID="c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.951667 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78"} err="failed to get container status \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": rpc error: code = NotFound desc = could not find container \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": container with ID starting with c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.951721 5024 scope.go:117] "RemoveContainer" containerID="36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.957417 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd"} err="failed to get container status \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": rpc error: code = NotFound desc = could not find container \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": container with ID starting with 36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.957496 5024 scope.go:117] "RemoveContainer" containerID="fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.959328 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d"} err="failed to get container status \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": rpc error: code = NotFound desc = could not find container \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": container with ID starting with fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d not found: ID does not exist" Nov 
28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.959396 5024 scope.go:117] "RemoveContainer" containerID="5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.959844 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1"} err="failed to get container status \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": rpc error: code = NotFound desc = could not find container \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": container with ID starting with 5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.959894 5024 scope.go:117] "RemoveContainer" containerID="649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.961246 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10"} err="failed to get container status \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": rpc error: code = NotFound desc = could not find container \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": container with ID starting with 649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.961300 5024 scope.go:117] "RemoveContainer" containerID="4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.961623 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654"} err="failed to get container status \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": rpc error: code = NotFound desc = could not find container \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": container with ID starting with 4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.961649 5024 scope.go:117] "RemoveContainer" containerID="55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.961847 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323"} err="failed to get container status \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": rpc error: code = NotFound desc = could not find container \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": container with ID starting with 55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.961871 5024 scope.go:117] "RemoveContainer" containerID="778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.962091 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8"} err="failed to get container status 
\"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": rpc error: code = NotFound desc = could not find container \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": container with ID starting with 778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.962633 5024 scope.go:117] "RemoveContainer" containerID="eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.963174 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a"} err="failed to get container status \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": rpc error: code = NotFound desc = could not find container \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": container with ID starting with eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.963199 5024 scope.go:117] "RemoveContainer" containerID="c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.963494 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78"} err="failed to get container status \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": rpc error: code = NotFound desc = could not find container \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": container with ID starting with c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.963561 5024 scope.go:117] "RemoveContainer" containerID="36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.963879 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd"} err="failed to get container status \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": rpc error: code = NotFound desc = could not find container \"36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd\": container with ID starting with 36b4ec3008b906157d2373fc1fcfe6cb10bc88054e058dda362703c6d0e37bbd not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.963930 5024 scope.go:117] "RemoveContainer" containerID="fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.964183 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d"} err="failed to get container status \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": rpc error: code = NotFound desc = could not find container \"fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d\": container with ID starting with fc19499d1042faf092cbb9e25b709c41d7215f743bcfdb0c16bcf9bc4085910d not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.964228 5024 scope.go:117] "RemoveContainer" 
containerID="5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.964543 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1"} err="failed to get container status \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": rpc error: code = NotFound desc = could not find container \"5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1\": container with ID starting with 5a2c1a13f2741e6f46b23f50a326589e4eca82b14d454890fc940fa0655f94d1 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.964598 5024 scope.go:117] "RemoveContainer" containerID="649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.964893 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10"} err="failed to get container status \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": rpc error: code = NotFound desc = could not find container \"649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10\": container with ID starting with 649f8b6a5fc379158fb7fe5f4a08da2477f33e1d661c57077e7c7d3136d05e10 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.964914 5024 scope.go:117] "RemoveContainer" containerID="4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.965253 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654"} err="failed to get container status \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": rpc error: code = NotFound desc = could not find container \"4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654\": container with ID starting with 4ef45925ed61f2b0f0eb63f86e1acbef61874f97c2469fbc25dfb8cf3e460654 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.965275 5024 scope.go:117] "RemoveContainer" containerID="55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.965594 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323"} err="failed to get container status \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": rpc error: code = NotFound desc = could not find container \"55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323\": container with ID starting with 55d37eb63290d17471ed41bba6e570908deaecc2fd1151b5f162d9dd3d896323 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.965619 5024 scope.go:117] "RemoveContainer" containerID="778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.965947 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8"} err="failed to get container status \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": rpc error: code = NotFound desc = could not find 
container \"778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8\": container with ID starting with 778d1a3c6d60dbf80e35c798c94e358f9b590e78e70d8e9fce3b208cb272b7c8 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.965985 5024 scope.go:117] "RemoveContainer" containerID="eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.966309 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a"} err="failed to get container status \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": rpc error: code = NotFound desc = could not find container \"eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a\": container with ID starting with eb2bf591c4f30ed4d00f0cb8748cf17dc8652fd0dc420535fd7cc7898980419a not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.966336 5024 scope.go:117] "RemoveContainer" containerID="c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.966572 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78"} err="failed to get container status \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": rpc error: code = NotFound desc = could not find container \"c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78\": container with ID starting with c7eb6bbd65b4e3bb7abcbe04ce046479a17339e185dd3996d3a7382c6ed03c78 not found: ID does not exist" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-node-log\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968492 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-etc-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968517 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-cni-bin\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968599 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-run-ovn-kubernetes\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968662 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-slash\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968697 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-ovn\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968729 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968756 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmvcv\" (UniqueName: \"kubernetes.io/projected/7baa6139-5a88-4017-a3b3-2ed48b133773-kube-api-access-rmvcv\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968791 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-run-netns\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968818 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-cni-netd\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968866 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-var-lib-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968920 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.968950 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-ovnkube-script-lib\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: 
I1128 17:11:12.968990 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-systemd\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.969033 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-systemd-units\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.969169 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-env-overrides\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.969231 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-log-socket\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.969842 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-ovnkube-config\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.969945 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-kubelet\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.970005 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7baa6139-5a88-4017-a3b3-2ed48b133773-ovn-node-metrics-cert\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.974912 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b2gbm"] Nov 28 17:11:12 crc kubenswrapper[5024]: I1128 17:11:12.978826 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b2gbm"] Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071283 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-kubelet\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071357 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7baa6139-5a88-4017-a3b3-2ed48b133773-ovn-node-metrics-cert\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071391 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-node-log\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071413 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-etc-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071476 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-kubelet\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071523 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-cni-bin\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071554 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-run-ovn-kubernetes\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071504 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-node-log\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071604 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-cni-bin\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071621 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-etc-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071663 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-slash\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071722 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-slash\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071737 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-ovn\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071758 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-ovn\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071782 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-run-ovn-kubernetes\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071860 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071899 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071930 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmvcv\" (UniqueName: \"kubernetes.io/projected/7baa6139-5a88-4017-a3b3-2ed48b133773-kube-api-access-rmvcv\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.071983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-run-netns\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072044 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-cni-netd\") pod \"ovnkube-node-l4xcf\" (UID: 
\"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072090 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-var-lib-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072230 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072261 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-ovnkube-script-lib\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072343 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-systemd\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072397 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072412 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-cni-netd\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072468 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-var-lib-openvswitch\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072442 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-systemd-units\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072411 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-systemd-units\") pod \"ovnkube-node-l4xcf\" (UID: 
\"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072531 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-host-run-netns\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072566 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-run-systemd\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072664 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-env-overrides\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072753 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-log-socket\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072805 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-ovnkube-config\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.072806 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7baa6139-5a88-4017-a3b3-2ed48b133773-log-socket\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.073514 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-env-overrides\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.073527 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-ovnkube-script-lib\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.073964 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7baa6139-5a88-4017-a3b3-2ed48b133773-ovnkube-config\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.077798 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7baa6139-5a88-4017-a3b3-2ed48b133773-ovn-node-metrics-cert\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.100978 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmvcv\" (UniqueName: \"kubernetes.io/projected/7baa6139-5a88-4017-a3b3-2ed48b133773-kube-api-access-rmvcv\") pod \"ovnkube-node-l4xcf\" (UID: \"7baa6139-5a88-4017-a3b3-2ed48b133773\") " pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.109371 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.667564 5024 generic.go:334] "Generic (PLEG): container finished" podID="7baa6139-5a88-4017-a3b3-2ed48b133773" containerID="a0d622a664e3145e8d84218d64505946494b4b9333ffa758eb9926f634f1c972" exitCode=0 Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.667778 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerDied","Data":"a0d622a664e3145e8d84218d64505946494b4b9333ffa758eb9926f634f1c972"} Nov 28 17:11:13 crc kubenswrapper[5024]: I1128 17:11:13.668080 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"83a2ee5af5b0ffcf6840d81840187a5e8bc3096e266853b0807ffc5fa50c66c8"} Nov 28 17:11:14 crc kubenswrapper[5024]: I1128 17:11:14.509568 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b1542ec-e582-404b-8649-4a2a3e6ac398" path="/var/lib/kubelet/pods/5b1542ec-e582-404b-8649-4a2a3e6ac398/volumes" Nov 28 17:11:14 crc kubenswrapper[5024]: I1128 17:11:14.690500 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"1e9c7be99efc8db026f244731b2d2ca833ad27727a18ae62dca8efb5f7783511"} Nov 28 17:11:14 crc kubenswrapper[5024]: I1128 17:11:14.690568 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"9f88d1a1238c151bb124730176281357bbba96aab1a8e274b76d2e2a20d5ccaa"} Nov 28 17:11:14 crc kubenswrapper[5024]: I1128 17:11:14.690583 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"4675236134f617243a7e3bb2c03f94363c381dd2e5779cc76d69df6771d8902b"} Nov 28 17:11:15 crc kubenswrapper[5024]: I1128 17:11:15.702128 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"fd4742d43381d28b1550d5cba9054cbe328f39301ff266d248b45c537e7a6f8c"} Nov 28 17:11:15 crc kubenswrapper[5024]: I1128 17:11:15.702466 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" 
event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"d57f4cbe4568b09938c41efee245f700c9988e7b5d4fa418217a370c710be2b6"} Nov 28 17:11:15 crc kubenswrapper[5024]: I1128 17:11:15.702480 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"16216a1b37c66a235fa83058df617a29b5548f02ac2119a32d8a319537627bb9"} Nov 28 17:11:17 crc kubenswrapper[5024]: I1128 17:11:17.718460 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"7636ac9b6a139df4c9534322d37df4dcb83768d55d2fc9850e9cca93af5ff558"} Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.691863 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn"] Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.693244 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.697735 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.697735 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.701100 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ghfch" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.750469 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df27g\" (UniqueName: \"kubernetes.io/projected/9f64c6e9-5a4e-4c00-b8c0-f88418c1b290-kube-api-access-df27g\") pod \"obo-prometheus-operator-668cf9dfbb-z5jkn\" (UID: \"9f64c6e9-5a4e-4c00-b8c0-f88418c1b290\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.770692 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc"] Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.772048 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.774385 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.774384 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zdgcf" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.785372 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2"] Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.786482 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.879600 5024 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.888342 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df27g\" (UniqueName: \"kubernetes.io/projected/9f64c6e9-5a4e-4c00-b8c0-f88418c1b290-kube-api-access-df27g\") pod \"obo-prometheus-operator-668cf9dfbb-z5jkn\" (UID: \"9f64c6e9-5a4e-4c00-b8c0-f88418c1b290\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.888391 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47a5fd85-fd8e-4b0f-84b0-9c00154e2654-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2\" (UID: \"47a5fd85-fd8e-4b0f-84b0-9c00154e2654\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.888446 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47a5fd85-fd8e-4b0f-84b0-9c00154e2654-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2\" (UID: \"47a5fd85-fd8e-4b0f-84b0-9c00154e2654\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.916750 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df27g\" (UniqueName: \"kubernetes.io/projected/9f64c6e9-5a4e-4c00-b8c0-f88418c1b290-kube-api-access-df27g\") pod \"obo-prometheus-operator-668cf9dfbb-z5jkn\" (UID: \"9f64c6e9-5a4e-4c00-b8c0-f88418c1b290\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.989421 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47a5fd85-fd8e-4b0f-84b0-9c00154e2654-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2\" (UID: \"47a5fd85-fd8e-4b0f-84b0-9c00154e2654\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.989508 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c22be4d1-2db0-48de-9439-c24282cf63b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc\" (UID: \"c22be4d1-2db0-48de-9439-c24282cf63b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.989579 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c22be4d1-2db0-48de-9439-c24282cf63b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc\" (UID: \"c22be4d1-2db0-48de-9439-c24282cf63b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:18 crc 
kubenswrapper[5024]: I1128 17:11:18.989601 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47a5fd85-fd8e-4b0f-84b0-9c00154e2654-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2\" (UID: \"47a5fd85-fd8e-4b0f-84b0-9c00154e2654\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.993366 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47a5fd85-fd8e-4b0f-84b0-9c00154e2654-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2\" (UID: \"47a5fd85-fd8e-4b0f-84b0-9c00154e2654\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.996150 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-h25jc"] Nov 28 17:11:18 crc kubenswrapper[5024]: I1128 17:11:18.998643 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.004121 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-999n5" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.004205 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.010268 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47a5fd85-fd8e-4b0f-84b0-9c00154e2654-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2\" (UID: \"47a5fd85-fd8e-4b0f-84b0-9c00154e2654\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.017544 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.086173 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(2472b7b1c5d418898b1c74f607c0d44f97a9340fb2b93e99c042cab02a2567cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.086252 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(2472b7b1c5d418898b1c74f607c0d44f97a9340fb2b93e99c042cab02a2567cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.086279 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(2472b7b1c5d418898b1c74f607c0d44f97a9340fb2b93e99c042cab02a2567cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.086334 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators(9f64c6e9-5a4e-4c00-b8c0-f88418c1b290)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators(9f64c6e9-5a4e-4c00-b8c0-f88418c1b290)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(2472b7b1c5d418898b1c74f607c0d44f97a9340fb2b93e99c042cab02a2567cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" podUID="9f64c6e9-5a4e-4c00-b8c0-f88418c1b290" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.090398 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c22be4d1-2db0-48de-9439-c24282cf63b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc\" (UID: \"c22be4d1-2db0-48de-9439-c24282cf63b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.090494 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c22be4d1-2db0-48de-9439-c24282cf63b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc\" (UID: \"c22be4d1-2db0-48de-9439-c24282cf63b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.095826 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c22be4d1-2db0-48de-9439-c24282cf63b8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc\" (UID: \"c22be4d1-2db0-48de-9439-c24282cf63b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.097870 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c22be4d1-2db0-48de-9439-c24282cf63b8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc\" (UID: \"c22be4d1-2db0-48de-9439-c24282cf63b8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.103501 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.143204 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(58c351106e463e89ad53458b224b944399b50a0975afba2b6854a4c5f14870ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.143283 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(58c351106e463e89ad53458b224b944399b50a0975afba2b6854a4c5f14870ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.143308 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(58c351106e463e89ad53458b224b944399b50a0975afba2b6854a4c5f14870ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.143365 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators(47a5fd85-fd8e-4b0f-84b0-9c00154e2654)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators(47a5fd85-fd8e-4b0f-84b0-9c00154e2654)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(58c351106e463e89ad53458b224b944399b50a0975afba2b6854a4c5f14870ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" podUID="47a5fd85-fd8e-4b0f-84b0-9c00154e2654" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.181535 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-7l9j5"] Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.188691 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.192620 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e5a62fe-852d-487a-ae2e-852fc2a21d22-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-h25jc\" (UID: \"7e5a62fe-852d-487a-ae2e-852fc2a21d22\") " pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.192699 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcs9d\" (UniqueName: \"kubernetes.io/projected/7e5a62fe-852d-487a-ae2e-852fc2a21d22-kube-api-access-mcs9d\") pod \"observability-operator-d8bb48f5d-h25jc\" (UID: \"7e5a62fe-852d-487a-ae2e-852fc2a21d22\") " pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.193880 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-wd6zd" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.293989 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e5a62fe-852d-487a-ae2e-852fc2a21d22-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-h25jc\" (UID: \"7e5a62fe-852d-487a-ae2e-852fc2a21d22\") " pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.294082 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2427c8bc-48b6-42d2-b7fa-3a1493e45095-openshift-service-ca\") pod \"perses-operator-5446b9c989-7l9j5\" (UID: \"2427c8bc-48b6-42d2-b7fa-3a1493e45095\") " pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.294111 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc28h\" (UniqueName: \"kubernetes.io/projected/2427c8bc-48b6-42d2-b7fa-3a1493e45095-kube-api-access-lc28h\") pod \"perses-operator-5446b9c989-7l9j5\" (UID: \"2427c8bc-48b6-42d2-b7fa-3a1493e45095\") " pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.294144 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcs9d\" (UniqueName: \"kubernetes.io/projected/7e5a62fe-852d-487a-ae2e-852fc2a21d22-kube-api-access-mcs9d\") pod \"observability-operator-d8bb48f5d-h25jc\" (UID: \"7e5a62fe-852d-487a-ae2e-852fc2a21d22\") " pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.298922 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e5a62fe-852d-487a-ae2e-852fc2a21d22-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-h25jc\" (UID: \"7e5a62fe-852d-487a-ae2e-852fc2a21d22\") " pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.325782 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcs9d\" (UniqueName: 
\"kubernetes.io/projected/7e5a62fe-852d-487a-ae2e-852fc2a21d22-kube-api-access-mcs9d\") pod \"observability-operator-d8bb48f5d-h25jc\" (UID: \"7e5a62fe-852d-487a-ae2e-852fc2a21d22\") " pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.388761 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.396348 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2427c8bc-48b6-42d2-b7fa-3a1493e45095-openshift-service-ca\") pod \"perses-operator-5446b9c989-7l9j5\" (UID: \"2427c8bc-48b6-42d2-b7fa-3a1493e45095\") " pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.397215 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc28h\" (UniqueName: \"kubernetes.io/projected/2427c8bc-48b6-42d2-b7fa-3a1493e45095-kube-api-access-lc28h\") pod \"perses-operator-5446b9c989-7l9j5\" (UID: \"2427c8bc-48b6-42d2-b7fa-3a1493e45095\") " pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.397162 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2427c8bc-48b6-42d2-b7fa-3a1493e45095-openshift-service-ca\") pod \"perses-operator-5446b9c989-7l9j5\" (UID: \"2427c8bc-48b6-42d2-b7fa-3a1493e45095\") " pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.409201 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.419876 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc28h\" (UniqueName: \"kubernetes.io/projected/2427c8bc-48b6-42d2-b7fa-3a1493e45095-kube-api-access-lc28h\") pod \"perses-operator-5446b9c989-7l9j5\" (UID: \"2427c8bc-48b6-42d2-b7fa-3a1493e45095\") " pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.472614 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(0c8bde2026320bbfb26b2fde3e2cc8a8bb63ff6cdbc8b568866c1ad07889cafb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.472703 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(0c8bde2026320bbfb26b2fde3e2cc8a8bb63ff6cdbc8b568866c1ad07889cafb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.472729 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(0c8bde2026320bbfb26b2fde3e2cc8a8bb63ff6cdbc8b568866c1ad07889cafb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.472793 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators(c22be4d1-2db0-48de-9439-c24282cf63b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators(c22be4d1-2db0-48de-9439-c24282cf63b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(0c8bde2026320bbfb26b2fde3e2cc8a8bb63ff6cdbc8b568866c1ad07889cafb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" podUID="c22be4d1-2db0-48de-9439-c24282cf63b8" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.488336 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(5639f9ff473b91a6fbdf8dbd57eb7193317a8f4d70e72c2292b39f1059719ba6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.488422 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(5639f9ff473b91a6fbdf8dbd57eb7193317a8f4d70e72c2292b39f1059719ba6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.488456 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(5639f9ff473b91a6fbdf8dbd57eb7193317a8f4d70e72c2292b39f1059719ba6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.488521 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-h25jc_openshift-operators(7e5a62fe-852d-487a-ae2e-852fc2a21d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-h25jc_openshift-operators(7e5a62fe-852d-487a-ae2e-852fc2a21d22)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(5639f9ff473b91a6fbdf8dbd57eb7193317a8f4d70e72c2292b39f1059719ba6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" podUID="7e5a62fe-852d-487a-ae2e-852fc2a21d22" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.508660 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.538259 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(af8562171e9b75a536008923275a8dc73b6087b5871ef4c6b1c896df4c744dd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.538348 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(af8562171e9b75a536008923275a8dc73b6087b5871ef4c6b1c896df4c744dd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.538385 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(af8562171e9b75a536008923275a8dc73b6087b5871ef4c6b1c896df4c744dd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:19 crc kubenswrapper[5024]: E1128 17:11:19.538448 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-7l9j5_openshift-operators(2427c8bc-48b6-42d2-b7fa-3a1493e45095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-7l9j5_openshift-operators(2427c8bc-48b6-42d2-b7fa-3a1493e45095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(af8562171e9b75a536008923275a8dc73b6087b5871ef4c6b1c896df4c744dd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" podUID="2427c8bc-48b6-42d2-b7fa-3a1493e45095" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.762242 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" event={"ID":"7baa6139-5a88-4017-a3b3-2ed48b133773","Type":"ContainerStarted","Data":"e2e3346770e6ad6249efb0bee35a4eebeb926e0dcd920c53a0b1211223041fd7"} Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.762571 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.762752 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.762798 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.800995 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" podStartSLOduration=7.800977126 podStartE2EDuration="7.800977126s" podCreationTimestamp="2025-11-28 17:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:11:19.79975529 +0000 UTC m=+781.848676195" watchObservedRunningTime="2025-11-28 17:11:19.800977126 +0000 UTC m=+781.849898031" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.812207 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:19 crc kubenswrapper[5024]: I1128 17:11:19.821469 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.745765 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-h25jc"] Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.746193 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.746679 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.780362 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2"] Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.780511 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.781106 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.789234 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(365e64a45da85e065d224fa6d3b233a23742cc12134a32dcd3f4ff8aa1154b34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.789330 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(365e64a45da85e065d224fa6d3b233a23742cc12134a32dcd3f4ff8aa1154b34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.789359 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(365e64a45da85e065d224fa6d3b233a23742cc12134a32dcd3f4ff8aa1154b34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.789419 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-h25jc_openshift-operators(7e5a62fe-852d-487a-ae2e-852fc2a21d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-h25jc_openshift-operators(7e5a62fe-852d-487a-ae2e-852fc2a21d22)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-h25jc_openshift-operators_7e5a62fe-852d-487a-ae2e-852fc2a21d22_0(365e64a45da85e065d224fa6d3b233a23742cc12134a32dcd3f4ff8aa1154b34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" podUID="7e5a62fe-852d-487a-ae2e-852fc2a21d22" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.832713 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-7l9j5"] Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.832845 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.833427 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.836284 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(e5238667c911f99e5f78709ea7407c0b84794478de218c3a4c495046e9333711): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.836393 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(e5238667c911f99e5f78709ea7407c0b84794478de218c3a4c495046e9333711): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.836435 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(e5238667c911f99e5f78709ea7407c0b84794478de218c3a4c495046e9333711): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.836502 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators(47a5fd85-fd8e-4b0f-84b0-9c00154e2654)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators(47a5fd85-fd8e-4b0f-84b0-9c00154e2654)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_openshift-operators_47a5fd85-fd8e-4b0f-84b0-9c00154e2654_0(e5238667c911f99e5f78709ea7407c0b84794478de218c3a4c495046e9333711): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" podUID="47a5fd85-fd8e-4b0f-84b0-9c00154e2654" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.856653 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc"] Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.856806 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.857627 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.884380 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn"] Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.884532 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:20 crc kubenswrapper[5024]: I1128 17:11:20.885321 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.972231 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(4a3008ae50a815e4360ed74bc20e876d89116e1c171131108db0671e2facdf23): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.972316 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(4a3008ae50a815e4360ed74bc20e876d89116e1c171131108db0671e2facdf23): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.972345 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(4a3008ae50a815e4360ed74bc20e876d89116e1c171131108db0671e2facdf23): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.972398 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators(c22be4d1-2db0-48de-9439-c24282cf63b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators(c22be4d1-2db0-48de-9439-c24282cf63b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_openshift-operators_c22be4d1-2db0-48de-9439-c24282cf63b8_0(4a3008ae50a815e4360ed74bc20e876d89116e1c171131108db0671e2facdf23): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" podUID="c22be4d1-2db0-48de-9439-c24282cf63b8" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.977680 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(e9f289d8ae5199e2e7bb226eae4826cdc2c0dd4077a3dde92f9b6515c3865713): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.977906 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(e9f289d8ae5199e2e7bb226eae4826cdc2c0dd4077a3dde92f9b6515c3865713): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.977953 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(e9f289d8ae5199e2e7bb226eae4826cdc2c0dd4077a3dde92f9b6515c3865713): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:20 crc kubenswrapper[5024]: E1128 17:11:20.986098 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-7l9j5_openshift-operators(2427c8bc-48b6-42d2-b7fa-3a1493e45095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-7l9j5_openshift-operators(2427c8bc-48b6-42d2-b7fa-3a1493e45095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-7l9j5_openshift-operators_2427c8bc-48b6-42d2-b7fa-3a1493e45095_0(e9f289d8ae5199e2e7bb226eae4826cdc2c0dd4077a3dde92f9b6515c3865713): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" podUID="2427c8bc-48b6-42d2-b7fa-3a1493e45095" Nov 28 17:11:21 crc kubenswrapper[5024]: E1128 17:11:21.001294 5024 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(e418c253640aeb0a93a420c9ade8ebf90f2a1738b81e11ad382e1e37e8832ad8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:11:21 crc kubenswrapper[5024]: E1128 17:11:21.001437 5024 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(e418c253640aeb0a93a420c9ade8ebf90f2a1738b81e11ad382e1e37e8832ad8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:21 crc kubenswrapper[5024]: E1128 17:11:21.001463 5024 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(e418c253640aeb0a93a420c9ade8ebf90f2a1738b81e11ad382e1e37e8832ad8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:21 crc kubenswrapper[5024]: E1128 17:11:21.001517 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators(9f64c6e9-5a4e-4c00-b8c0-f88418c1b290)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators(9f64c6e9-5a4e-4c00-b8c0-f88418c1b290)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-z5jkn_openshift-operators_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290_0(e418c253640aeb0a93a420c9ade8ebf90f2a1738b81e11ad382e1e37e8832ad8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" podUID="9f64c6e9-5a4e-4c00-b8c0-f88418c1b290" Nov 28 17:11:32 crc kubenswrapper[5024]: I1128 17:11:32.497304 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:32 crc kubenswrapper[5024]: I1128 17:11:32.497550 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:32 crc kubenswrapper[5024]: I1128 17:11:32.498615 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" Nov 28 17:11:32 crc kubenswrapper[5024]: I1128 17:11:32.498980 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" Nov 28 17:11:33 crc kubenswrapper[5024]: I1128 17:11:33.001420 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2"] Nov 28 17:11:33 crc kubenswrapper[5024]: I1128 17:11:33.035698 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn"] Nov 28 17:11:33 crc kubenswrapper[5024]: I1128 17:11:33.888803 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" event={"ID":"9f64c6e9-5a4e-4c00-b8c0-f88418c1b290","Type":"ContainerStarted","Data":"93d316e1e2758f353c7867dd09fabf69f9f5df62eab7c9f2499d9e4c1e32a25d"} Nov 28 17:11:33 crc kubenswrapper[5024]: I1128 17:11:33.890976 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" event={"ID":"47a5fd85-fd8e-4b0f-84b0-9c00154e2654","Type":"ContainerStarted","Data":"a4cb06edbb2d2dbece4a8be024eed1165b72d44ce086bb175cba5a4c1b6703ae"} Nov 28 17:11:34 crc kubenswrapper[5024]: I1128 17:11:34.497451 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:34 crc kubenswrapper[5024]: I1128 17:11:34.497514 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:34 crc kubenswrapper[5024]: I1128 17:11:34.498739 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:34 crc kubenswrapper[5024]: I1128 17:11:34.498839 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" Nov 28 17:11:34 crc kubenswrapper[5024]: I1128 17:11:34.875965 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc"] Nov 28 17:11:34 crc kubenswrapper[5024]: I1128 17:11:34.924994 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" event={"ID":"c22be4d1-2db0-48de-9439-c24282cf63b8","Type":"ContainerStarted","Data":"d98f318a601360ff3d4f34825fba8f798b6c3a1972bad59cd693bfd9153ae3d7"} Nov 28 17:11:34 crc kubenswrapper[5024]: I1128 17:11:34.969558 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-7l9j5"] Nov 28 17:11:35 crc kubenswrapper[5024]: I1128 17:11:35.497768 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:35 crc kubenswrapper[5024]: I1128 17:11:35.498688 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:35 crc kubenswrapper[5024]: I1128 17:11:35.800608 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-h25jc"] Nov 28 17:11:35 crc kubenswrapper[5024]: I1128 17:11:35.934571 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" event={"ID":"2427c8bc-48b6-42d2-b7fa-3a1493e45095","Type":"ContainerStarted","Data":"c0a9151ae2fad693bb1c4e11e3466e97c6ccc073bd0ae9f8a18aefc6ba1ab67b"} Nov 28 17:11:35 crc kubenswrapper[5024]: I1128 17:11:35.938285 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" event={"ID":"7e5a62fe-852d-487a-ae2e-852fc2a21d22","Type":"ContainerStarted","Data":"42cd2f9915cdbbf91fa88f7dbf16fef28c204f903911ff9d0b3acabd6be5e542"} Nov 28 17:11:37 crc kubenswrapper[5024]: I1128 17:11:37.567575 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:11:37 crc kubenswrapper[5024]: I1128 17:11:37.567683 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:11:43 crc kubenswrapper[5024]: I1128 17:11:43.142145 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l4xcf" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.055457 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" 
event={"ID":"47a5fd85-fd8e-4b0f-84b0-9c00154e2654","Type":"ContainerStarted","Data":"e4123b64a152c95096f37e4b00e95d98f4d0b5051ab89da6da6500a23bcfb97d"} Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.057138 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" event={"ID":"c22be4d1-2db0-48de-9439-c24282cf63b8","Type":"ContainerStarted","Data":"ff059f333781fe2205794b12935b164d057f18be286f529985410cb2cbfba698"} Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.059250 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" event={"ID":"2427c8bc-48b6-42d2-b7fa-3a1493e45095","Type":"ContainerStarted","Data":"75640f296c9f41c11a98e0e7075a23b88909cbae9b8689417b2d8e66f08d9c0c"} Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.059501 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.061757 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" event={"ID":"9f64c6e9-5a4e-4c00-b8c0-f88418c1b290","Type":"ContainerStarted","Data":"ac417583bad47777c69e7997651a3a185bbc620d2ed1426f1b52f97240c28ea1"} Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.063675 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" event={"ID":"7e5a62fe-852d-487a-ae2e-852fc2a21d22","Type":"ContainerStarted","Data":"b65aeb326871f8146699589cdb9130e1e40cd2ecca9c535a4d5fbd3dbf69fc4d"} Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.063987 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.083575 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2" podStartSLOduration=15.8852503 podStartE2EDuration="33.08355201s" podCreationTimestamp="2025-11-28 17:11:18 +0000 UTC" firstStartedPulling="2025-11-28 17:11:33.025499888 +0000 UTC m=+795.074420793" lastFinishedPulling="2025-11-28 17:11:50.223801598 +0000 UTC m=+812.272722503" observedRunningTime="2025-11-28 17:11:51.079052468 +0000 UTC m=+813.127973393" watchObservedRunningTime="2025-11-28 17:11:51.08355201 +0000 UTC m=+813.132472915" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.097555 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.117824 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-z5jkn" podStartSLOduration=15.936684436 podStartE2EDuration="33.117795489s" podCreationTimestamp="2025-11-28 17:11:18 +0000 UTC" firstStartedPulling="2025-11-28 17:11:33.045962101 +0000 UTC m=+795.094883006" lastFinishedPulling="2025-11-28 17:11:50.227073134 +0000 UTC m=+812.275994059" observedRunningTime="2025-11-28 17:11:51.110749671 +0000 UTC m=+813.159670586" watchObservedRunningTime="2025-11-28 17:11:51.117795489 +0000 UTC m=+813.166716394" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.135463 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-d8bb48f5d-h25jc" podStartSLOduration=18.691744199 podStartE2EDuration="33.135436228s" podCreationTimestamp="2025-11-28 17:11:18 +0000 UTC" firstStartedPulling="2025-11-28 17:11:35.819933702 +0000 UTC m=+797.868854607" lastFinishedPulling="2025-11-28 17:11:50.263625691 +0000 UTC m=+812.312546636" observedRunningTime="2025-11-28 17:11:51.130138822 +0000 UTC m=+813.179059727" watchObservedRunningTime="2025-11-28 17:11:51.135436228 +0000 UTC m=+813.184357143" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.172584 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" podStartSLOduration=16.922330738 podStartE2EDuration="32.172556882s" podCreationTimestamp="2025-11-28 17:11:19 +0000 UTC" firstStartedPulling="2025-11-28 17:11:35.004752652 +0000 UTC m=+797.053673557" lastFinishedPulling="2025-11-28 17:11:50.254978796 +0000 UTC m=+812.303899701" observedRunningTime="2025-11-28 17:11:51.154079397 +0000 UTC m=+813.203000302" watchObservedRunningTime="2025-11-28 17:11:51.172556882 +0000 UTC m=+813.221477787" Nov 28 17:11:51 crc kubenswrapper[5024]: I1128 17:11:51.174257 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc" podStartSLOduration=17.829905516 podStartE2EDuration="33.174247222s" podCreationTimestamp="2025-11-28 17:11:18 +0000 UTC" firstStartedPulling="2025-11-28 17:11:34.900472401 +0000 UTC m=+796.949393306" lastFinishedPulling="2025-11-28 17:11:50.244814107 +0000 UTC m=+812.293735012" observedRunningTime="2025-11-28 17:11:51.169087549 +0000 UTC m=+813.218008455" watchObservedRunningTime="2025-11-28 17:11:51.174247222 +0000 UTC m=+813.223168127" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.942707 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l8xp5"] Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.944530 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-l8xp5" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.947346 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.947472 5024 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xtrdd" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.947378 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.956555 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-v2rnd"] Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.957823 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.965789 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l8xp5"] Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.973681 5024 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ddbr7" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.982330 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l7mzh"] Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.983895 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.987611 5024 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xmtfr" Nov 28 17:11:58 crc kubenswrapper[5024]: I1128 17:11:58.992933 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-v2rnd"] Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.014793 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l7mzh"] Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.105997 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtm62\" (UniqueName: \"kubernetes.io/projected/c469de85-5ad7-4f96-9db9-d4db161236d9-kube-api-access-gtm62\") pod \"cert-manager-cainjector-7f985d654d-v2rnd\" (UID: \"c469de85-5ad7-4f96-9db9-d4db161236d9\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.106087 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplq6\" (UniqueName: \"kubernetes.io/projected/184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac-kube-api-access-gplq6\") pod \"cert-manager-webhook-5655c58dd6-l7mzh\" (UID: \"184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.106119 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq6mb\" (UniqueName: \"kubernetes.io/projected/57f11f62-daab-4268-9107-f97095a8cc24-kube-api-access-rq6mb\") pod \"cert-manager-5b446d88c5-l8xp5\" (UID: \"57f11f62-daab-4268-9107-f97095a8cc24\") " pod="cert-manager/cert-manager-5b446d88c5-l8xp5" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.207775 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq6mb\" (UniqueName: \"kubernetes.io/projected/57f11f62-daab-4268-9107-f97095a8cc24-kube-api-access-rq6mb\") pod \"cert-manager-5b446d88c5-l8xp5\" (UID: \"57f11f62-daab-4268-9107-f97095a8cc24\") " pod="cert-manager/cert-manager-5b446d88c5-l8xp5" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.207997 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtm62\" (UniqueName: \"kubernetes.io/projected/c469de85-5ad7-4f96-9db9-d4db161236d9-kube-api-access-gtm62\") pod \"cert-manager-cainjector-7f985d654d-v2rnd\" (UID: \"c469de85-5ad7-4f96-9db9-d4db161236d9\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 
17:11:59.208088 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplq6\" (UniqueName: \"kubernetes.io/projected/184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac-kube-api-access-gplq6\") pod \"cert-manager-webhook-5655c58dd6-l7mzh\" (UID: \"184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.229815 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq6mb\" (UniqueName: \"kubernetes.io/projected/57f11f62-daab-4268-9107-f97095a8cc24-kube-api-access-rq6mb\") pod \"cert-manager-5b446d88c5-l8xp5\" (UID: \"57f11f62-daab-4268-9107-f97095a8cc24\") " pod="cert-manager/cert-manager-5b446d88c5-l8xp5" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.230447 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtm62\" (UniqueName: \"kubernetes.io/projected/c469de85-5ad7-4f96-9db9-d4db161236d9-kube-api-access-gtm62\") pod \"cert-manager-cainjector-7f985d654d-v2rnd\" (UID: \"c469de85-5ad7-4f96-9db9-d4db161236d9\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.231955 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplq6\" (UniqueName: \"kubernetes.io/projected/184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac-kube-api-access-gplq6\") pod \"cert-manager-webhook-5655c58dd6-l7mzh\" (UID: \"184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.279663 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-l8xp5" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.287963 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.309131 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.512295 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-7l9j5" Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.575999 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-v2rnd"] Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.846673 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l7mzh"] Nov 28 17:11:59 crc kubenswrapper[5024]: W1128 17:11:59.848910 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod184af68b_5dc9_41ec_b2fc_11ea0e1cb8ac.slice/crio-8458aa34278d5ace34feb45c7b36d26751f1a73e293760051199217a44b574a5 WatchSource:0}: Error finding container 8458aa34278d5ace34feb45c7b36d26751f1a73e293760051199217a44b574a5: Status 404 returned error can't find the container with id 8458aa34278d5ace34feb45c7b36d26751f1a73e293760051199217a44b574a5 Nov 28 17:11:59 crc kubenswrapper[5024]: W1128 17:11:59.865108 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f11f62_daab_4268_9107_f97095a8cc24.slice/crio-2cb7969da2bb86597e7badcbab7d9ebf31b79ce40ad980b60bf6a3e108a8cd22 WatchSource:0}: Error finding container 2cb7969da2bb86597e7badcbab7d9ebf31b79ce40ad980b60bf6a3e108a8cd22: Status 404 returned error can't find the container with id 2cb7969da2bb86597e7badcbab7d9ebf31b79ce40ad980b60bf6a3e108a8cd22 Nov 28 17:11:59 crc kubenswrapper[5024]: I1128 17:11:59.868086 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l8xp5"] Nov 28 17:12:00 crc kubenswrapper[5024]: I1128 17:12:00.126401 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" event={"ID":"c469de85-5ad7-4f96-9db9-d4db161236d9","Type":"ContainerStarted","Data":"95a6570bf6e82fa0c5846301c4d6d9ba31ff4b45b4b940e4213524c766d7e8b1"} Nov 28 17:12:00 crc kubenswrapper[5024]: I1128 17:12:00.127821 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-l8xp5" event={"ID":"57f11f62-daab-4268-9107-f97095a8cc24","Type":"ContainerStarted","Data":"2cb7969da2bb86597e7badcbab7d9ebf31b79ce40ad980b60bf6a3e108a8cd22"} Nov 28 17:12:00 crc kubenswrapper[5024]: I1128 17:12:00.129243 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" event={"ID":"184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac","Type":"ContainerStarted","Data":"8458aa34278d5ace34feb45c7b36d26751f1a73e293760051199217a44b574a5"} Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.295367 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" event={"ID":"c469de85-5ad7-4f96-9db9-d4db161236d9","Type":"ContainerStarted","Data":"0c2b38d4d07d37b539c9785d27e1fe4253d714712787f4ddb4090f46e3f654b2"} Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.297594 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-l8xp5" event={"ID":"57f11f62-daab-4268-9107-f97095a8cc24","Type":"ContainerStarted","Data":"81ea50fb2c6ce1103d675faf6ec7e4a3e0e18c3e77f11e013786d26251fd5207"} Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 
17:12:07.299007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" event={"ID":"184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac","Type":"ContainerStarted","Data":"82fef1851e54d6e958a3f99b00b0e6bf84a687db479860873d1a4f1cf1290380"} Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.299135 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.347676 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-v2rnd" podStartSLOduration=3.041570262 podStartE2EDuration="9.347613514s" podCreationTimestamp="2025-11-28 17:11:58 +0000 UTC" firstStartedPulling="2025-11-28 17:11:59.600066566 +0000 UTC m=+821.648987471" lastFinishedPulling="2025-11-28 17:12:05.906109808 +0000 UTC m=+827.955030723" observedRunningTime="2025-11-28 17:12:07.319576348 +0000 UTC m=+829.368497263" watchObservedRunningTime="2025-11-28 17:12:07.347613514 +0000 UTC m=+829.396534429" Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.373821 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-l8xp5" podStartSLOduration=3.256721079 podStartE2EDuration="9.373782935s" podCreationTimestamp="2025-11-28 17:11:58 +0000 UTC" firstStartedPulling="2025-11-28 17:11:59.867810382 +0000 UTC m=+821.916731287" lastFinishedPulling="2025-11-28 17:12:05.984872238 +0000 UTC m=+828.033793143" observedRunningTime="2025-11-28 17:12:07.336150967 +0000 UTC m=+829.385071872" watchObservedRunningTime="2025-11-28 17:12:07.373782935 +0000 UTC m=+829.422703840" Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.380488 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" podStartSLOduration=3.329511113 podStartE2EDuration="9.380468692s" podCreationTimestamp="2025-11-28 17:11:58 +0000 UTC" firstStartedPulling="2025-11-28 17:11:59.855139359 +0000 UTC m=+821.904060264" lastFinishedPulling="2025-11-28 17:12:05.906096928 +0000 UTC m=+827.955017843" observedRunningTime="2025-11-28 17:12:07.360666619 +0000 UTC m=+829.409587524" watchObservedRunningTime="2025-11-28 17:12:07.380468692 +0000 UTC m=+829.429389597" Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.565069 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.565181 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.565264 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.566932 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b519f9b78edbf9b228fc85037669f9ab174eddbe4b594ce06b779c1bf0c5cf3c"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:12:07 crc kubenswrapper[5024]: I1128 17:12:07.567078 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://b519f9b78edbf9b228fc85037669f9ab174eddbe4b594ce06b779c1bf0c5cf3c" gracePeriod=600 Nov 28 17:12:08 crc kubenswrapper[5024]: I1128 17:12:08.309301 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="b519f9b78edbf9b228fc85037669f9ab174eddbe4b594ce06b779c1bf0c5cf3c" exitCode=0 Nov 28 17:12:08 crc kubenswrapper[5024]: I1128 17:12:08.309395 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"b519f9b78edbf9b228fc85037669f9ab174eddbe4b594ce06b779c1bf0c5cf3c"} Nov 28 17:12:08 crc kubenswrapper[5024]: I1128 17:12:08.310418 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"88f26a0a596a708c394834d35e939b4bff9c97e9c07da03ec569d30bef11bf70"} Nov 28 17:12:08 crc kubenswrapper[5024]: I1128 17:12:08.310454 5024 scope.go:117] "RemoveContainer" containerID="b2b8407cc3bf17902050626002a98c22963b96352f4dad4e0be00a881d87b638" Nov 28 17:12:14 crc kubenswrapper[5024]: I1128 17:12:14.312915 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-l7mzh" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.285002 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt"] Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.287273 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.291961 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.298994 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.299355 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.299533 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxvr\" (UniqueName: \"kubernetes.io/projected/b90e9055-da41-4e44-b546-6b1de6fd44eb-kube-api-access-glxvr\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.306592 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt"] Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.402256 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.402702 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glxvr\" (UniqueName: \"kubernetes.io/projected/b90e9055-da41-4e44-b546-6b1de6fd44eb-kube-api-access-glxvr\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.402871 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.402910 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.403239 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.427884 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glxvr\" (UniqueName: \"kubernetes.io/projected/b90e9055-da41-4e44-b546-6b1de6fd44eb-kube-api-access-glxvr\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.608907 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.642781 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g"] Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.650899 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.654264 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g"] Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.706944 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcngw\" (UniqueName: \"kubernetes.io/projected/a2f27c25-5fba-497d-ab04-88a773c09bf7-kube-api-access-rcngw\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.707087 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.707157 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 
crc kubenswrapper[5024]: I1128 17:12:42.808929 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcngw\" (UniqueName: \"kubernetes.io/projected/a2f27c25-5fba-497d-ab04-88a773c09bf7-kube-api-access-rcngw\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.809508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.809558 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.810747 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.810931 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.841401 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcngw\" (UniqueName: \"kubernetes.io/projected/a2f27c25-5fba-497d-ab04-88a773c09bf7-kube-api-access-rcngw\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:42 crc kubenswrapper[5024]: I1128 17:12:42.916044 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt"] Nov 28 17:12:43 crc kubenswrapper[5024]: I1128 17:12:43.010653 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:43 crc kubenswrapper[5024]: I1128 17:12:43.290663 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g"] Nov 28 17:12:43 crc kubenswrapper[5024]: W1128 17:12:43.307522 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f27c25_5fba_497d_ab04_88a773c09bf7.slice/crio-2c522ae743f400c7389db997fb87def052b4b81d2f6dfcde25cb3f17cbb299b9 WatchSource:0}: Error finding container 2c522ae743f400c7389db997fb87def052b4b81d2f6dfcde25cb3f17cbb299b9: Status 404 returned error can't find the container with id 2c522ae743f400c7389db997fb87def052b4b81d2f6dfcde25cb3f17cbb299b9 Nov 28 17:12:43 crc kubenswrapper[5024]: I1128 17:12:43.561887 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" event={"ID":"a2f27c25-5fba-497d-ab04-88a773c09bf7","Type":"ContainerStarted","Data":"53c2d9dbb75a1b0fbcb18caecab6a0721c7cd75fbfbef8a2f6047a6ee01a4bc9"} Nov 28 17:12:43 crc kubenswrapper[5024]: I1128 17:12:43.561950 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" event={"ID":"a2f27c25-5fba-497d-ab04-88a773c09bf7","Type":"ContainerStarted","Data":"2c522ae743f400c7389db997fb87def052b4b81d2f6dfcde25cb3f17cbb299b9"} Nov 28 17:12:43 crc kubenswrapper[5024]: I1128 17:12:43.564066 5024 generic.go:334] "Generic (PLEG): container finished" podID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerID="8ab3b690481dc6d953359be7a754ca7443fe50873d3057d7f0318e595f71985e" exitCode=0 Nov 28 17:12:43 crc kubenswrapper[5024]: I1128 17:12:43.564147 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" event={"ID":"b90e9055-da41-4e44-b546-6b1de6fd44eb","Type":"ContainerDied","Data":"8ab3b690481dc6d953359be7a754ca7443fe50873d3057d7f0318e595f71985e"} Nov 28 17:12:43 crc kubenswrapper[5024]: I1128 17:12:43.564192 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" event={"ID":"b90e9055-da41-4e44-b546-6b1de6fd44eb","Type":"ContainerStarted","Data":"19fa8e1859b9e0334e6bfd512aab3d29b8b3ed474c941ee796ca56ee4a791622"} Nov 28 17:12:44 crc kubenswrapper[5024]: I1128 17:12:44.572329 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerID="53c2d9dbb75a1b0fbcb18caecab6a0721c7cd75fbfbef8a2f6047a6ee01a4bc9" exitCode=0 Nov 28 17:12:44 crc kubenswrapper[5024]: I1128 17:12:44.572412 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" event={"ID":"a2f27c25-5fba-497d-ab04-88a773c09bf7","Type":"ContainerDied","Data":"53c2d9dbb75a1b0fbcb18caecab6a0721c7cd75fbfbef8a2f6047a6ee01a4bc9"} Nov 28 17:12:45 crc kubenswrapper[5024]: I1128 17:12:45.586250 5024 generic.go:334] "Generic (PLEG): container finished" podID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerID="39681d41e21c4f1df03e117719982f9ef2b1d26ef00beac336518c0181db005f" exitCode=0 Nov 28 17:12:45 crc kubenswrapper[5024]: I1128 17:12:45.586681 5024 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" event={"ID":"b90e9055-da41-4e44-b546-6b1de6fd44eb","Type":"ContainerDied","Data":"39681d41e21c4f1df03e117719982f9ef2b1d26ef00beac336518c0181db005f"} Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.000879 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vfvp9"] Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.003696 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.032234 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vfvp9"] Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.178188 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-utilities\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.178235 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndrcx\" (UniqueName: \"kubernetes.io/projected/10aa5d41-7d59-435a-b4ca-97ad5dac5029-kube-api-access-ndrcx\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.178304 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-catalog-content\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.279634 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-utilities\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.279690 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndrcx\" (UniqueName: \"kubernetes.io/projected/10aa5d41-7d59-435a-b4ca-97ad5dac5029-kube-api-access-ndrcx\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.279755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-catalog-content\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.280502 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-utilities\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 
17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.280524 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-catalog-content\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.302243 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndrcx\" (UniqueName: \"kubernetes.io/projected/10aa5d41-7d59-435a-b4ca-97ad5dac5029-kube-api-access-ndrcx\") pod \"redhat-operators-vfvp9\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.321297 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.562772 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vfvp9"] Nov 28 17:12:46 crc kubenswrapper[5024]: W1128 17:12:46.569802 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10aa5d41_7d59_435a_b4ca_97ad5dac5029.slice/crio-98460b9913953d0851b904e0fab449588439fc71caa6527fec75b105a479aea1 WatchSource:0}: Error finding container 98460b9913953d0851b904e0fab449588439fc71caa6527fec75b105a479aea1: Status 404 returned error can't find the container with id 98460b9913953d0851b904e0fab449588439fc71caa6527fec75b105a479aea1 Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.598859 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vfvp9" event={"ID":"10aa5d41-7d59-435a-b4ca-97ad5dac5029","Type":"ContainerStarted","Data":"98460b9913953d0851b904e0fab449588439fc71caa6527fec75b105a479aea1"} Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.606216 5024 generic.go:334] "Generic (PLEG): container finished" podID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerID="068e9f04e2ac32d7c0701d086317d1569f526f32053f55bf391028f19c2619d7" exitCode=0 Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.606286 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" event={"ID":"b90e9055-da41-4e44-b546-6b1de6fd44eb","Type":"ContainerDied","Data":"068e9f04e2ac32d7c0701d086317d1569f526f32053f55bf391028f19c2619d7"} Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.608967 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerID="31e322007797d1835892cf2dee4fe151262b7cd5b12b9f8978b6b9c38cc116d5" exitCode=0 Nov 28 17:12:46 crc kubenswrapper[5024]: I1128 17:12:46.609039 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" event={"ID":"a2f27c25-5fba-497d-ab04-88a773c09bf7","Type":"ContainerDied","Data":"31e322007797d1835892cf2dee4fe151262b7cd5b12b9f8978b6b9c38cc116d5"} Nov 28 17:12:47 crc kubenswrapper[5024]: I1128 17:12:47.623351 5024 generic.go:334] "Generic (PLEG): container finished" podID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerID="5b042d9b890e6c1102928f0c54992d4fe2fa994efb07814a895a4426a4f03609" exitCode=0 Nov 28 17:12:47 crc kubenswrapper[5024]: I1128 17:12:47.623440 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vfvp9" event={"ID":"10aa5d41-7d59-435a-b4ca-97ad5dac5029","Type":"ContainerDied","Data":"5b042d9b890e6c1102928f0c54992d4fe2fa994efb07814a895a4426a4f03609"} Nov 28 17:12:47 crc kubenswrapper[5024]: I1128 17:12:47.626405 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerID="05931d3be9b5c48af2b36137ee0c808ecaea6b3cab977abd30ffe322b2d2915f" exitCode=0 Nov 28 17:12:47 crc kubenswrapper[5024]: I1128 17:12:47.626466 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" event={"ID":"a2f27c25-5fba-497d-ab04-88a773c09bf7","Type":"ContainerDied","Data":"05931d3be9b5c48af2b36137ee0c808ecaea6b3cab977abd30ffe322b2d2915f"} Nov 28 17:12:47 crc kubenswrapper[5024]: I1128 17:12:47.900760 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.008090 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glxvr\" (UniqueName: \"kubernetes.io/projected/b90e9055-da41-4e44-b546-6b1de6fd44eb-kube-api-access-glxvr\") pod \"b90e9055-da41-4e44-b546-6b1de6fd44eb\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.008165 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-bundle\") pod \"b90e9055-da41-4e44-b546-6b1de6fd44eb\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.008291 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-util\") pod \"b90e9055-da41-4e44-b546-6b1de6fd44eb\" (UID: \"b90e9055-da41-4e44-b546-6b1de6fd44eb\") " Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.011343 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-bundle" (OuterVolumeSpecName: "bundle") pod "b90e9055-da41-4e44-b546-6b1de6fd44eb" (UID: "b90e9055-da41-4e44-b546-6b1de6fd44eb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.019816 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90e9055-da41-4e44-b546-6b1de6fd44eb-kube-api-access-glxvr" (OuterVolumeSpecName: "kube-api-access-glxvr") pod "b90e9055-da41-4e44-b546-6b1de6fd44eb" (UID: "b90e9055-da41-4e44-b546-6b1de6fd44eb"). InnerVolumeSpecName "kube-api-access-glxvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.022982 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-util" (OuterVolumeSpecName: "util") pod "b90e9055-da41-4e44-b546-6b1de6fd44eb" (UID: "b90e9055-da41-4e44-b546-6b1de6fd44eb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.110720 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.110802 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b90e9055-da41-4e44-b546-6b1de6fd44eb-util\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.110827 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glxvr\" (UniqueName: \"kubernetes.io/projected/b90e9055-da41-4e44-b546-6b1de6fd44eb-kube-api-access-glxvr\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.636120 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.637243 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt" event={"ID":"b90e9055-da41-4e44-b546-6b1de6fd44eb","Type":"ContainerDied","Data":"19fa8e1859b9e0334e6bfd512aab3d29b8b3ed474c941ee796ca56ee4a791622"} Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.637278 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19fa8e1859b9e0334e6bfd512aab3d29b8b3ed474c941ee796ca56ee4a791622" Nov 28 17:12:48 crc kubenswrapper[5024]: I1128 17:12:48.927710 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.133535 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-bundle\") pod \"a2f27c25-5fba-497d-ab04-88a773c09bf7\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.134117 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-util\") pod \"a2f27c25-5fba-497d-ab04-88a773c09bf7\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.134307 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcngw\" (UniqueName: \"kubernetes.io/projected/a2f27c25-5fba-497d-ab04-88a773c09bf7-kube-api-access-rcngw\") pod \"a2f27c25-5fba-497d-ab04-88a773c09bf7\" (UID: \"a2f27c25-5fba-497d-ab04-88a773c09bf7\") " Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.136321 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-bundle" (OuterVolumeSpecName: "bundle") pod "a2f27c25-5fba-497d-ab04-88a773c09bf7" (UID: "a2f27c25-5fba-497d-ab04-88a773c09bf7"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.143013 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f27c25-5fba-497d-ab04-88a773c09bf7-kube-api-access-rcngw" (OuterVolumeSpecName: "kube-api-access-rcngw") pod "a2f27c25-5fba-497d-ab04-88a773c09bf7" (UID: "a2f27c25-5fba-497d-ab04-88a773c09bf7"). InnerVolumeSpecName "kube-api-access-rcngw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.152266 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-util" (OuterVolumeSpecName: "util") pod "a2f27c25-5fba-497d-ab04-88a773c09bf7" (UID: "a2f27c25-5fba-497d-ab04-88a773c09bf7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.236377 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.236428 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f27c25-5fba-497d-ab04-88a773c09bf7-util\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.236439 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcngw\" (UniqueName: \"kubernetes.io/projected/a2f27c25-5fba-497d-ab04-88a773c09bf7-kube-api-access-rcngw\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.645489 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" event={"ID":"a2f27c25-5fba-497d-ab04-88a773c09bf7","Type":"ContainerDied","Data":"2c522ae743f400c7389db997fb87def052b4b81d2f6dfcde25cb3f17cbb299b9"} Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.646167 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c522ae743f400c7389db997fb87def052b4b81d2f6dfcde25cb3f17cbb299b9" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.645734 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g" Nov 28 17:12:49 crc kubenswrapper[5024]: I1128 17:12:49.660447 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vfvp9" event={"ID":"10aa5d41-7d59-435a-b4ca-97ad5dac5029","Type":"ContainerStarted","Data":"e45ff47f068a3fdef7049d1e016d095f49c0a31d338bfb4402799c1601435ee7"} Nov 28 17:12:50 crc kubenswrapper[5024]: I1128 17:12:50.668439 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vfvp9" event={"ID":"10aa5d41-7d59-435a-b4ca-97ad5dac5029","Type":"ContainerDied","Data":"e45ff47f068a3fdef7049d1e016d095f49c0a31d338bfb4402799c1601435ee7"} Nov 28 17:12:50 crc kubenswrapper[5024]: I1128 17:12:50.668276 5024 generic.go:334] "Generic (PLEG): container finished" podID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerID="e45ff47f068a3fdef7049d1e016d095f49c0a31d338bfb4402799c1601435ee7" exitCode=0 Nov 28 17:12:51 crc kubenswrapper[5024]: I1128 17:12:51.678638 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vfvp9" event={"ID":"10aa5d41-7d59-435a-b4ca-97ad5dac5029","Type":"ContainerStarted","Data":"5b5763f1896e6417af1668f009574fc35b78371220f891d5b179377b6f39ff56"} Nov 28 17:12:51 crc kubenswrapper[5024]: I1128 17:12:51.698509 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vfvp9" podStartSLOduration=3.030230686 podStartE2EDuration="6.698480317s" podCreationTimestamp="2025-11-28 17:12:45 +0000 UTC" firstStartedPulling="2025-11-28 17:12:47.625741993 +0000 UTC m=+869.674662898" lastFinishedPulling="2025-11-28 17:12:51.293991624 +0000 UTC m=+873.342912529" observedRunningTime="2025-11-28 17:12:51.698207999 +0000 UTC m=+873.747128904" watchObservedRunningTime="2025-11-28 17:12:51.698480317 +0000 UTC m=+873.747401222" Nov 28 17:12:56 crc kubenswrapper[5024]: I1128 17:12:56.321548 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:56 crc kubenswrapper[5024]: I1128 17:12:56.321826 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:12:57 crc kubenswrapper[5024]: I1128 17:12:57.365500 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vfvp9" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="registry-server" probeResult="failure" output=< Nov 28 17:12:57 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 17:12:57 crc kubenswrapper[5024]: > Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418412 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482"] Nov 28 17:12:58 crc kubenswrapper[5024]: E1128 17:12:58.418695 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerName="pull" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418709 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerName="pull" Nov 28 17:12:58 crc kubenswrapper[5024]: E1128 17:12:58.418720 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerName="util" Nov 28 17:12:58 crc 
kubenswrapper[5024]: I1128 17:12:58.418726 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerName="util" Nov 28 17:12:58 crc kubenswrapper[5024]: E1128 17:12:58.418739 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerName="extract" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418745 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerName="extract" Nov 28 17:12:58 crc kubenswrapper[5024]: E1128 17:12:58.418755 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerName="pull" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418761 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerName="pull" Nov 28 17:12:58 crc kubenswrapper[5024]: E1128 17:12:58.418772 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerName="extract" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418778 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerName="extract" Nov 28 17:12:58 crc kubenswrapper[5024]: E1128 17:12:58.418789 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerName="util" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418795 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerName="util" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418900 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f27c25-5fba-497d-ab04-88a773c09bf7" containerName="extract" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.418919 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90e9055-da41-4e44-b546-6b1de6fd44eb" containerName="extract" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.419602 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.427639 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.443760 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.457828 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.458886 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-clqbn" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.459108 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.459249 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.483462 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482"] Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.624399 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/f89c9ab8-a552-4228-9dbc-2af4129a1be3-manager-config\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.624507 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pzh\" (UniqueName: \"kubernetes.io/projected/f89c9ab8-a552-4228-9dbc-2af4129a1be3-kube-api-access-v2pzh\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.624554 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-apiservice-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.624591 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.624617 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-webhook-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.725877 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2pzh\" (UniqueName: \"kubernetes.io/projected/f89c9ab8-a552-4228-9dbc-2af4129a1be3-kube-api-access-v2pzh\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.726246 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-apiservice-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.726349 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.726432 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-webhook-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.726536 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/f89c9ab8-a552-4228-9dbc-2af4129a1be3-manager-config\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.727581 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/f89c9ab8-a552-4228-9dbc-2af4129a1be3-manager-config\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.733468 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.734317 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-apiservice-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.742673 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f89c9ab8-a552-4228-9dbc-2af4129a1be3-webhook-cert\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.747199 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pzh\" (UniqueName: \"kubernetes.io/projected/f89c9ab8-a552-4228-9dbc-2af4129a1be3-kube-api-access-v2pzh\") pod \"loki-operator-controller-manager-d7f585bbf-gt482\" (UID: \"f89c9ab8-a552-4228-9dbc-2af4129a1be3\") " pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:58 crc kubenswrapper[5024]: I1128 17:12:58.768998 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:12:59 crc kubenswrapper[5024]: I1128 17:12:59.136451 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482"] Nov 28 17:12:59 crc kubenswrapper[5024]: W1128 17:12:59.139203 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf89c9ab8_a552_4228_9dbc_2af4129a1be3.slice/crio-aa6d59c79c5dac91491c6bdcd55d9fd2bf8ddca49b47d3f676dfb0a17d1f1769 WatchSource:0}: Error finding container aa6d59c79c5dac91491c6bdcd55d9fd2bf8ddca49b47d3f676dfb0a17d1f1769: Status 404 returned error can't find the container with id aa6d59c79c5dac91491c6bdcd55d9fd2bf8ddca49b47d3f676dfb0a17d1f1769 Nov 28 17:12:59 crc kubenswrapper[5024]: I1128 17:12:59.750815 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" event={"ID":"f89c9ab8-a552-4228-9dbc-2af4129a1be3","Type":"ContainerStarted","Data":"aa6d59c79c5dac91491c6bdcd55d9fd2bf8ddca49b47d3f676dfb0a17d1f1769"} Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.705669 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-zdmd8"] Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.710132 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.714362 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-h2hbp" Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.714634 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.714758 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.721048 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-zdmd8"] Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.865777 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgf46\" (UniqueName: \"kubernetes.io/projected/5a04dfeb-c7c2-443a-affd-11879c5e2b5d-kube-api-access-vgf46\") pod \"cluster-logging-operator-ff9846bd-zdmd8\" (UID: \"5a04dfeb-c7c2-443a-affd-11879c5e2b5d\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.967905 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgf46\" (UniqueName: \"kubernetes.io/projected/5a04dfeb-c7c2-443a-affd-11879c5e2b5d-kube-api-access-vgf46\") pod \"cluster-logging-operator-ff9846bd-zdmd8\" (UID: \"5a04dfeb-c7c2-443a-affd-11879c5e2b5d\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" Nov 28 17:13:02 crc kubenswrapper[5024]: I1128 17:13:02.999540 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgf46\" (UniqueName: \"kubernetes.io/projected/5a04dfeb-c7c2-443a-affd-11879c5e2b5d-kube-api-access-vgf46\") pod \"cluster-logging-operator-ff9846bd-zdmd8\" (UID: \"5a04dfeb-c7c2-443a-affd-11879c5e2b5d\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" Nov 28 17:13:03 crc kubenswrapper[5024]: I1128 17:13:03.042013 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" Nov 28 17:13:03 crc kubenswrapper[5024]: I1128 17:13:03.412163 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-zdmd8"] Nov 28 17:13:03 crc kubenswrapper[5024]: I1128 17:13:03.787112 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" event={"ID":"5a04dfeb-c7c2-443a-affd-11879c5e2b5d","Type":"ContainerStarted","Data":"de03bce3b26da96fcba9e90491377fdae1d955f09d71936f00c8f0f925cd47bf"} Nov 28 17:13:06 crc kubenswrapper[5024]: I1128 17:13:06.385903 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:13:06 crc kubenswrapper[5024]: I1128 17:13:06.434175 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:13:06 crc kubenswrapper[5024]: I1128 17:13:06.866610 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" event={"ID":"f89c9ab8-a552-4228-9dbc-2af4129a1be3","Type":"ContainerStarted","Data":"236d05be2429ee04425dca2908f46e6a388e7e306c5d0844cefafd4bc4bd98cf"} Nov 28 17:13:09 crc kubenswrapper[5024]: I1128 17:13:09.789454 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vfvp9"] Nov 28 17:13:09 crc kubenswrapper[5024]: I1128 17:13:09.790271 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vfvp9" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="registry-server" containerID="cri-o://5b5763f1896e6417af1668f009574fc35b78371220f891d5b179377b6f39ff56" gracePeriod=2 Nov 28 17:13:10 crc kubenswrapper[5024]: I1128 17:13:10.923295 5024 generic.go:334] "Generic (PLEG): container finished" podID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerID="5b5763f1896e6417af1668f009574fc35b78371220f891d5b179377b6f39ff56" exitCode=0 Nov 28 17:13:10 crc kubenswrapper[5024]: I1128 17:13:10.923646 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vfvp9" event={"ID":"10aa5d41-7d59-435a-b4ca-97ad5dac5029","Type":"ContainerDied","Data":"5b5763f1896e6417af1668f009574fc35b78371220f891d5b179377b6f39ff56"} Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.361197 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.538859 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndrcx\" (UniqueName: \"kubernetes.io/projected/10aa5d41-7d59-435a-b4ca-97ad5dac5029-kube-api-access-ndrcx\") pod \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.539737 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-utilities\") pod \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.539808 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-catalog-content\") pod \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\" (UID: \"10aa5d41-7d59-435a-b4ca-97ad5dac5029\") " Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.541661 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-utilities" (OuterVolumeSpecName: "utilities") pod "10aa5d41-7d59-435a-b4ca-97ad5dac5029" (UID: "10aa5d41-7d59-435a-b4ca-97ad5dac5029"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.547294 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10aa5d41-7d59-435a-b4ca-97ad5dac5029-kube-api-access-ndrcx" (OuterVolumeSpecName: "kube-api-access-ndrcx") pod "10aa5d41-7d59-435a-b4ca-97ad5dac5029" (UID: "10aa5d41-7d59-435a-b4ca-97ad5dac5029"). InnerVolumeSpecName "kube-api-access-ndrcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.641631 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndrcx\" (UniqueName: \"kubernetes.io/projected/10aa5d41-7d59-435a-b4ca-97ad5dac5029-kube-api-access-ndrcx\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.641673 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.653278 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10aa5d41-7d59-435a-b4ca-97ad5dac5029" (UID: "10aa5d41-7d59-435a-b4ca-97ad5dac5029"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.745937 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10aa5d41-7d59-435a-b4ca-97ad5dac5029-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.966439 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vfvp9" event={"ID":"10aa5d41-7d59-435a-b4ca-97ad5dac5029","Type":"ContainerDied","Data":"98460b9913953d0851b904e0fab449588439fc71caa6527fec75b105a479aea1"} Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.966509 5024 scope.go:117] "RemoveContainer" containerID="5b5763f1896e6417af1668f009574fc35b78371220f891d5b179377b6f39ff56" Nov 28 17:13:15 crc kubenswrapper[5024]: I1128 17:13:15.966676 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vfvp9" Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.004713 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vfvp9"] Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.013414 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vfvp9"] Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.326004 5024 scope.go:117] "RemoveContainer" containerID="e45ff47f068a3fdef7049d1e016d095f49c0a31d338bfb4402799c1601435ee7" Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.396042 5024 scope.go:117] "RemoveContainer" containerID="5b042d9b890e6c1102928f0c54992d4fe2fa994efb07814a895a4426a4f03609" Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.508210 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" path="/var/lib/kubelet/pods/10aa5d41-7d59-435a-b4ca-97ad5dac5029/volumes" Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.980652 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" event={"ID":"f89c9ab8-a552-4228-9dbc-2af4129a1be3","Type":"ContainerStarted","Data":"8f79fdf4be68cbdf4501a97bd8b96194b2bddab9005df6db50c790a891586fee"} Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.981006 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.983745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" event={"ID":"5a04dfeb-c7c2-443a-affd-11879c5e2b5d","Type":"ContainerStarted","Data":"20d59a5597eb32d565617ade93598ab32baa3a4c9f3bee46d2088fda06c03880"} Nov 28 17:13:16 crc kubenswrapper[5024]: I1128 17:13:16.984724 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" Nov 28 17:13:17 crc kubenswrapper[5024]: I1128 17:13:17.020157 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-d7f585bbf-gt482" podStartSLOduration=1.729436442 podStartE2EDuration="19.020131964s" podCreationTimestamp="2025-11-28 17:12:58 +0000 UTC" firstStartedPulling="2025-11-28 17:12:59.141414553 +0000 UTC m=+881.190335458" lastFinishedPulling="2025-11-28 17:13:16.432110075 
+0000 UTC m=+898.481030980" observedRunningTime="2025-11-28 17:13:17.012738346 +0000 UTC m=+899.061659271" watchObservedRunningTime="2025-11-28 17:13:17.020131964 +0000 UTC m=+899.069052869" Nov 28 17:13:17 crc kubenswrapper[5024]: I1128 17:13:17.062778 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-zdmd8" podStartSLOduration=2.157403029 podStartE2EDuration="15.06275565s" podCreationTimestamp="2025-11-28 17:13:02 +0000 UTC" firstStartedPulling="2025-11-28 17:13:03.421468833 +0000 UTC m=+885.470389728" lastFinishedPulling="2025-11-28 17:13:16.326821444 +0000 UTC m=+898.375742349" observedRunningTime="2025-11-28 17:13:17.061461751 +0000 UTC m=+899.110382656" watchObservedRunningTime="2025-11-28 17:13:17.06275565 +0000 UTC m=+899.111676555" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.886839 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 28 17:13:21 crc kubenswrapper[5024]: E1128 17:13:21.887930 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="extract-utilities" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.887955 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="extract-utilities" Nov 28 17:13:21 crc kubenswrapper[5024]: E1128 17:13:21.887981 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="extract-content" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.887992 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="extract-content" Nov 28 17:13:21 crc kubenswrapper[5024]: E1128 17:13:21.888042 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="registry-server" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.888055 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="registry-server" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.888291 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="10aa5d41-7d59-435a-b4ca-97ad5dac5029" containerName="registry-server" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.889201 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.893924 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.894118 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.894189 5024 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-7bdv8" Nov 28 17:13:21 crc kubenswrapper[5024]: I1128 17:13:21.896125 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.072338 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpg58\" (UniqueName: \"kubernetes.io/projected/e91e40e8-38b1-46b8-8363-2d8ba5bc34a3-kube-api-access-gpg58\") pod \"minio\" (UID: \"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3\") " pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.072483 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35711692-edab-4956-9ad2-cbf852e887fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35711692-edab-4956-9ad2-cbf852e887fb\") pod \"minio\" (UID: \"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3\") " pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.174547 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35711692-edab-4956-9ad2-cbf852e887fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35711692-edab-4956-9ad2-cbf852e887fb\") pod \"minio\" (UID: \"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3\") " pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.174650 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpg58\" (UniqueName: \"kubernetes.io/projected/e91e40e8-38b1-46b8-8363-2d8ba5bc34a3-kube-api-access-gpg58\") pod \"minio\" (UID: \"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3\") " pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.179542 5024 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.179611 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35711692-edab-4956-9ad2-cbf852e887fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35711692-edab-4956-9ad2-cbf852e887fb\") pod \"minio\" (UID: \"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ffa7c11db0720e9f5a480c3fb1163de39daef91cb8181c4647f5242469065c3/globalmount\"" pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.203580 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpg58\" (UniqueName: \"kubernetes.io/projected/e91e40e8-38b1-46b8-8363-2d8ba5bc34a3-kube-api-access-gpg58\") pod \"minio\" (UID: \"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3\") " pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.218306 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35711692-edab-4956-9ad2-cbf852e887fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35711692-edab-4956-9ad2-cbf852e887fb\") pod \"minio\" (UID: \"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3\") " pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.510942 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 28 17:13:22 crc kubenswrapper[5024]: I1128 17:13:22.976158 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 28 17:13:23 crc kubenswrapper[5024]: I1128 17:13:23.038239 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3","Type":"ContainerStarted","Data":"e88708f9cfa482e01d088e24768855dacd05f066e977ba981cf757a7f33b60e3"} Nov 28 17:13:28 crc kubenswrapper[5024]: I1128 17:13:28.079919 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e91e40e8-38b1-46b8-8363-2d8ba5bc34a3","Type":"ContainerStarted","Data":"851418b38d3847eba4b67e8c808532bf1786b6a35605994431ccf1d79acea476"} Nov 28 17:13:28 crc kubenswrapper[5024]: I1128 17:13:28.107747 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.694381482 podStartE2EDuration="9.107714376s" podCreationTimestamp="2025-11-28 17:13:19 +0000 UTC" firstStartedPulling="2025-11-28 17:13:22.990400196 +0000 UTC m=+905.039321101" lastFinishedPulling="2025-11-28 17:13:27.40373309 +0000 UTC m=+909.452653995" observedRunningTime="2025-11-28 17:13:28.098527275 +0000 UTC m=+910.147448190" watchObservedRunningTime="2025-11-28 17:13:28.107714376 +0000 UTC m=+910.156635291" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:31.999696 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.001865 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.009142 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.009168 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.009142 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-hxgbp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.009382 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.017998 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.035591 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.174528 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.174576 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfbd5a2d-412b-4b26-9205-aaa29032a355-config\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.174615 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvvb\" (UniqueName: \"kubernetes.io/projected/bfbd5a2d-412b-4b26-9205-aaa29032a355-kube-api-access-xlvvb\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.174655 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.174699 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.194493 5024 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-9pdl6"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.195910 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.198757 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.199291 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.199441 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.220919 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-9pdl6"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.276516 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.276620 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfbd5a2d-412b-4b26-9205-aaa29032a355-config\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.276645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.276674 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvvb\" (UniqueName: \"kubernetes.io/projected/bfbd5a2d-412b-4b26-9205-aaa29032a355-kube-api-access-xlvvb\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.276703 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.277930 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " 
pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.278046 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfbd5a2d-412b-4b26-9205-aaa29032a355-config\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.284934 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.309452 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvvb\" (UniqueName: \"kubernetes.io/projected/bfbd5a2d-412b-4b26-9205-aaa29032a355-kube-api-access-xlvvb\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.315745 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/bfbd5a2d-412b-4b26-9205-aaa29032a355-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-mm6j7\" (UID: \"bfbd5a2d-412b-4b26-9205-aaa29032a355\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.317788 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.318772 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.330499 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.330770 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.332543 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.379306 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.380639 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.380823 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c34353-2dbd-495c-9fc8-44773dc2bd68-config\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.380926 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.381013 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.381122 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.381206 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzxfl\" (UniqueName: \"kubernetes.io/projected/f9c34353-2dbd-495c-9fc8-44773dc2bd68-kube-api-access-zzxfl\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482437 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482501 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482531 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482557 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482583 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzxfl\" (UniqueName: \"kubernetes.io/projected/f9c34353-2dbd-495c-9fc8-44773dc2bd68-kube-api-access-zzxfl\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482609 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482628 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st59c\" (UniqueName: \"kubernetes.io/projected/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-kube-api-access-st59c\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482674 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482701 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482734 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-config\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.482766 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c34353-2dbd-495c-9fc8-44773dc2bd68-config\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.483770 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c34353-2dbd-495c-9fc8-44773dc2bd68-config\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.489220 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.492579 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.494370 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.506677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/f9c34353-2dbd-495c-9fc8-44773dc2bd68-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.536932 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzxfl\" (UniqueName: \"kubernetes.io/projected/f9c34353-2dbd-495c-9fc8-44773dc2bd68-kube-api-access-zzxfl\") pod \"logging-loki-querier-5895d59bb8-9pdl6\" (UID: \"f9c34353-2dbd-495c-9fc8-44773dc2bd68\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.586098 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-query-frontend-http\") pod 
\"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.586172 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-config\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.586229 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.586257 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.586276 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st59c\" (UniqueName: \"kubernetes.io/projected/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-kube-api-access-st59c\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.587963 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.589515 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-config\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.592608 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.595801 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.597041 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.614080 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.614301 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.614523 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.614670 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.618355 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.640948 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st59c\" (UniqueName: \"kubernetes.io/projected/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-kube-api-access-st59c\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.641560 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/98fe9e7c-1bfa-4f87-8c04-7c0a660db429-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-jzttp\" (UID: \"98fe9e7c-1bfa-4f87-8c04-7c0a660db429\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690399 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-lokistack-gateway\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690494 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-rbac\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690559 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x6v5\" (UniqueName: \"kubernetes.io/projected/c46c86f9-64ab-4020-9c49-799d926ba3ad-kube-api-access-9x6v5\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690619 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tenants\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " 
pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690689 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690760 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690804 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.690835 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.692719 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.711240 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.715593 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.719109 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"] Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.720093 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-4lllb" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.749732 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.793872 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794365 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-rbac\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794391 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-lokistack-gateway\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794435 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-lokistack-gateway\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794460 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tenants\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794496 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-rbac\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794519 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2cx8\" (UniqueName: \"kubernetes.io/projected/000df583-958f-43ae-b8f5-36a537d3d3d8-kube-api-access-c2cx8\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794541 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 
17:13:32.794570 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794605 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6v5\" (UniqueName: \"kubernetes.io/projected/c46c86f9-64ab-4020-9c49-799d926ba3ad-kube-api-access-9x6v5\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794651 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794677 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tenants\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794704 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794735 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794760 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.794780 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: E1128 17:13:32.794948 5024 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Nov 28 17:13:32 crc kubenswrapper[5024]: E1128 17:13:32.795010 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tls-secret podName:c46c86f9-64ab-4020-9c49-799d926ba3ad nodeName:}" failed. No retries permitted until 2025-11-28 17:13:33.294987653 +0000 UTC m=+915.343908558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tls-secret") pod "logging-loki-gateway-8f58fb6f6-zdmvm" (UID: "c46c86f9-64ab-4020-9c49-799d926ba3ad") : secret "logging-loki-gateway-http" not found
Nov 28 17:13:32 crc kubenswrapper[5024]: E1128 17:13:32.795349 5024 configmap.go:193] Couldn't get configMap openshift-logging/logging-loki-gateway-ca-bundle: configmap "logging-loki-gateway-ca-bundle" not found
Nov 28 17:13:32 crc kubenswrapper[5024]: E1128 17:13:32.795377 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-ca-bundle podName:c46c86f9-64ab-4020-9c49-799d926ba3ad nodeName:}" failed. No retries permitted until 2025-11-28 17:13:33.295369045 +0000 UTC m=+915.344289950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "logging-loki-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-ca-bundle") pod "logging-loki-gateway-8f58fb6f6-zdmvm" (UID: "c46c86f9-64ab-4020-9c49-799d926ba3ad") : configmap "logging-loki-gateway-ca-bundle" not found
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.796551 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-rbac\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.798253 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.799451 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.803483 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-lokistack-gateway\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.811663 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tenants\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.816735 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.820964 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6v5\" (UniqueName: \"kubernetes.io/projected/c46c86f9-64ab-4020-9c49-799d926ba3ad-kube-api-access-9x6v5\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.896729 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.896810 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-rbac\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.896835 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-lokistack-gateway\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.896873 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tenants\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.896908 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2cx8\" (UniqueName: \"kubernetes.io/projected/000df583-958f-43ae-b8f5-36a537d3d3d8-kube-api-access-c2cx8\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.896929 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.896956 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.897002 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.897800 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: E1128 17:13:32.897882 5024 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Nov 28 17:13:32 crc kubenswrapper[5024]: E1128 17:13:32.897997 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tls-secret podName:000df583-958f-43ae-b8f5-36a537d3d3d8 nodeName:}" failed. No retries permitted until 2025-11-28 17:13:33.397974878 +0000 UTC m=+915.446895783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tls-secret") pod "logging-loki-gateway-8f58fb6f6-qsbvr" (UID: "000df583-958f-43ae-b8f5-36a537d3d3d8") : secret "logging-loki-gateway-http" not found
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.898898 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.899004 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-rbac\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.899433 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/000df583-958f-43ae-b8f5-36a537d3d3d8-lokistack-gateway\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.902308 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.902536 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tenants\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:32 crc kubenswrapper[5024]: I1128 17:13:32.919062 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2cx8\" (UniqueName: \"kubernetes.io/projected/000df583-958f-43ae-b8f5-36a537d3d3d8-kube-api-access-c2cx8\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.131128 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.157692 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.181863 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.189356 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.194608 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.195002 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.199923 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.302630 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.304916 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307133 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jj9q\" (UniqueName: \"kubernetes.io/projected/91551520-15fb-40e8-9289-842fbcfadb7f-kube-api-access-6jj9q\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307270 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9375574-1a44-4d08-9cff-e63546b68642\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9375574-1a44-4d08-9cff-e63546b68642\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307308 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307346 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307385 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307428 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307498 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307438 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.308484 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46c86f9-64ab-4020-9c49-799d926ba3ad-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.307486 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.308590 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91551520-15fb-40e8-9289-842fbcfadb7f-config\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.308827 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.308956 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fcdc578-227f-4115-b27b-718bd67935a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fcdc578-227f-4115-b27b-718bd67935a0\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.315229 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c46c86f9-64ab-4020-9c49-799d926ba3ad-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-zdmvm\" (UID: \"c46c86f9-64ab-4020-9c49-799d926ba3ad\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.326393 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.392735 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-9pdl6"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.411116 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jj9q\" (UniqueName: \"kubernetes.io/projected/91551520-15fb-40e8-9289-842fbcfadb7f-kube-api-access-6jj9q\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.411197 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.411247 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e9375574-1a44-4d08-9cff-e63546b68642\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9375574-1a44-4d08-9cff-e63546b68642\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.411278 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.411314 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmgb\" (UniqueName: \"kubernetes.io/projected/50be66da-8b03-4827-8012-25c2140b64ac-kube-api-access-krmgb\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.411342 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.411486 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412141 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50be66da-8b03-4827-8012-25c2140b64ac-config\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412202 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91551520-15fb-40e8-9289-842fbcfadb7f-config\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412245 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412271 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412273 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412296 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412483 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412586 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fcdc578-227f-4115-b27b-718bd67935a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fcdc578-227f-4115-b27b-718bd67935a0\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412651 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.412819 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.413992 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91551520-15fb-40e8-9289-842fbcfadb7f-config\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.415955 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.417587 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/000df583-958f-43ae-b8f5-36a537d3d3d8-tls-secret\") pod \"logging-loki-gateway-8f58fb6f6-qsbvr\" (UID: \"000df583-958f-43ae-b8f5-36a537d3d3d8\") " pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.417755 5024 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.417786 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fcdc578-227f-4115-b27b-718bd67935a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fcdc578-227f-4115-b27b-718bd67935a0\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5fd094e3c5363439970a4ddae18e801737b0f02ed99abb5077293d77060c95b1/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.418197 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.418622 5024 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.418656 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e9375574-1a44-4d08-9cff-e63546b68642\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9375574-1a44-4d08-9cff-e63546b68642\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a69fd893461d7331803cbccddf7fa988bad989ea8acd69e6bc382eee2687d7c/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.418884 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/91551520-15fb-40e8-9289-842fbcfadb7f-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.433504 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jj9q\" (UniqueName: \"kubernetes.io/projected/91551520-15fb-40e8-9289-842fbcfadb7f-kube-api-access-6jj9q\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.445898 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e9375574-1a44-4d08-9cff-e63546b68642\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e9375574-1a44-4d08-9cff-e63546b68642\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.447448 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7fcdc578-227f-4115-b27b-718bd67935a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fcdc578-227f-4115-b27b-718bd67935a0\") pod \"logging-loki-ingester-0\" (UID: \"91551520-15fb-40e8-9289-842fbcfadb7f\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.484943 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.486058 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.489611 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.490575 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.509945 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.514535 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.514624 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.514756 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.514817 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krmgb\" (UniqueName: \"kubernetes.io/projected/50be66da-8b03-4827-8012-25c2140b64ac-kube-api-access-krmgb\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.514876 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50be66da-8b03-4827-8012-25c2140b64ac-config\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.514977 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.515007 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.516768 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.516782 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50be66da-8b03-4827-8012-25c2140b64ac-config\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.520145 5024 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.520174 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f91c04b7d9c77c9fb084039d7a9d4761faa89b21a7a945f78925b5d605f5a29/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.520411 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.520453 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.520686 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.522384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/50be66da-8b03-4827-8012-25c2140b64ac-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.543587 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmgb\" (UniqueName: \"kubernetes.io/projected/50be66da-8b03-4827-8012-25c2140b64ac-kube-api-access-krmgb\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.562508 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f59e3e4-657e-4c9e-80f4-fb477c94abf2\") pod \"logging-loki-compactor-0\" (UID: \"50be66da-8b03-4827-8012-25c2140b64ac\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.575471 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.616459 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.618110 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-599a397c-0029-40fe-8800-c7e7f642ee72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-599a397c-0029-40fe-8800-c7e7f642ee72\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.618418 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.618509 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8bh\" (UniqueName: \"kubernetes.io/projected/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-kube-api-access-gv8bh\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.618707 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.618963 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.619048 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.647488 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.673095 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.721070 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.721160 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-599a397c-0029-40fe-8800-c7e7f642ee72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-599a397c-0029-40fe-8800-c7e7f642ee72\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.721216 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.721257 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8bh\" (UniqueName: \"kubernetes.io/projected/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-kube-api-access-gv8bh\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.721294 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.721328 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.721355 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.723036 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.725701 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.727586 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.728631 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.730917 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.733155 5024 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.733223 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-599a397c-0029-40fe-8800-c7e7f642ee72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-599a397c-0029-40fe-8800-c7e7f642ee72\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ce41222f78b835844bd6983344eac29f6dc27289cbfc007ad813cbc384611d8/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.744882 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8bh\" (UniqueName: \"kubernetes.io/projected/15f007e2-eb1e-43b1-94cd-cf82cfadad4e-kube-api-access-gv8bh\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.768620 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-599a397c-0029-40fe-8800-c7e7f642ee72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-599a397c-0029-40fe-8800-c7e7f642ee72\") pod \"logging-loki-index-gateway-0\" (UID: \"15f007e2-eb1e-43b1-94cd-cf82cfadad4e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.808894 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:33 crc kubenswrapper[5024]: I1128 17:13:33.971935 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Nov 28 17:13:33 crc kubenswrapper[5024]: W1128 17:13:33.994388 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91551520_15fb_40e8_9289_842fbcfadb7f.slice/crio-ff427ec7e3c3e0c789599bf42e90ecaf49c5e83127acbc778e95e05428bf6143 WatchSource:0}: Error finding container ff427ec7e3c3e0c789599bf42e90ecaf49c5e83127acbc778e95e05428bf6143: Status 404 returned error can't find the container with id ff427ec7e3c3e0c789599bf42e90ecaf49c5e83127acbc778e95e05428bf6143
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.080767 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"]
Nov 28 17:13:34 crc kubenswrapper[5024]: W1128 17:13:34.096011 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc46c86f9_64ab_4020_9c49_799d926ba3ad.slice/crio-3fd90cf16a62b7e3a60cb22a518c0605e19b7ddb68c082464e9558aaf493adc8 WatchSource:0}: Error finding container 3fd90cf16a62b7e3a60cb22a518c0605e19b7ddb68c082464e9558aaf493adc8: Status 404 returned error can't find the container with id 3fd90cf16a62b7e3a60cb22a518c0605e19b7ddb68c082464e9558aaf493adc8
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.184557 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" event={"ID":"98fe9e7c-1bfa-4f87-8c04-7c0a660db429","Type":"ContainerStarted","Data":"ffbc7cca62a235d731ee59a38d7da1f87ff6a90ea09ec7b8e44ac2e4be915a9a"}
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.186442 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" event={"ID":"bfbd5a2d-412b-4b26-9205-aaa29032a355","Type":"ContainerStarted","Data":"f468978d1dcd8dc428ac1f8d4082bb3764251532c695ef52fd30bdc53793da07"}
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.188501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" event={"ID":"f9c34353-2dbd-495c-9fc8-44773dc2bd68","Type":"ContainerStarted","Data":"06c761a3647cb70527dae7654b57c5b884e6fdc46fbbbb6cd1e9d7334f25ac5a"}
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.189413 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" event={"ID":"c46c86f9-64ab-4020-9c49-799d926ba3ad","Type":"ContainerStarted","Data":"3fd90cf16a62b7e3a60cb22a518c0605e19b7ddb68c082464e9558aaf493adc8"}
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.190724 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"91551520-15fb-40e8-9289-842fbcfadb7f","Type":"ContainerStarted","Data":"ff427ec7e3c3e0c789599bf42e90ecaf49c5e83127acbc778e95e05428bf6143"}
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.224225 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"]
Nov 28 17:13:34 crc kubenswrapper[5024]: W1128 17:13:34.230011 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod000df583_958f_43ae_b8f5_36a537d3d3d8.slice/crio-1a18ade4beac2168602fd509a86d59841c142ac3e92191b5f3da624106c61d7e WatchSource:0}: Error finding container 1a18ade4beac2168602fd509a86d59841c142ac3e92191b5f3da624106c61d7e: Status 404 returned error can't find the container with id 1a18ade4beac2168602fd509a86d59841c142ac3e92191b5f3da624106c61d7e
Nov 28 17:13:34 crc kubenswrapper[5024]: W1128 17:13:34.231300 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50be66da_8b03_4827_8012_25c2140b64ac.slice/crio-1c71db74fa8ab972838352cb7c575d21955032f7055bdc7937e2356b379d4496 WatchSource:0}: Error finding container 1c71db74fa8ab972838352cb7c575d21955032f7055bdc7937e2356b379d4496: Status 404 returned error can't find the container with id 1c71db74fa8ab972838352cb7c575d21955032f7055bdc7937e2356b379d4496
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.236675 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Nov 28 17:13:34 crc kubenswrapper[5024]: I1128 17:13:34.303542 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Nov 28 17:13:35 crc kubenswrapper[5024]: I1128 17:13:35.198277 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" event={"ID":"000df583-958f-43ae-b8f5-36a537d3d3d8","Type":"ContainerStarted","Data":"1a18ade4beac2168602fd509a86d59841c142ac3e92191b5f3da624106c61d7e"}
Nov 28 17:13:35 crc kubenswrapper[5024]: I1128 17:13:35.202382 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"15f007e2-eb1e-43b1-94cd-cf82cfadad4e","Type":"ContainerStarted","Data":"5b546e158cc8af3ec13bf66d65c540456d21a1d64defeda0e54f9583585d0b65"}
Nov 28 17:13:35 crc kubenswrapper[5024]: I1128 17:13:35.203721 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"50be66da-8b03-4827-8012-25c2140b64ac","Type":"ContainerStarted","Data":"1c71db74fa8ab972838352cb7c575d21955032f7055bdc7937e2356b379d4496"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.227357 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"91551520-15fb-40e8-9289-842fbcfadb7f","Type":"ContainerStarted","Data":"9ee35f53f70b764bede4d70cf55cacd6a4e29449e7b7b14cbba2c87543c56df1"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.228263 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.231109 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" event={"ID":"000df583-958f-43ae-b8f5-36a537d3d3d8","Type":"ContainerStarted","Data":"3ee8c2611a8635c54402556110bc813fb1d879a3ebfae9e114bbab67615cf366"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.233147 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"15f007e2-eb1e-43b1-94cd-cf82cfadad4e","Type":"ContainerStarted","Data":"dd2933bd779121debbbb411fd67706c9d621a384b4642d5d2aa9f519a43be497"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.233305 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.234778 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" event={"ID":"98fe9e7c-1bfa-4f87-8c04-7c0a660db429","Type":"ContainerStarted","Data":"2213b7bf7f46b29a3e8706a955ffc0ca663450ecc03473d876ee230412722dfd"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.235552 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.237274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" event={"ID":"bfbd5a2d-412b-4b26-9205-aaa29032a355","Type":"ContainerStarted","Data":"1fc32eda61246fe741a23f67a6193b1432380e82aaefd6c3a4f958c5e47c9268"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.237715 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.239843 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"50be66da-8b03-4827-8012-25c2140b64ac","Type":"ContainerStarted","Data":"4d57491efd55fe65457b249358bb8a93f69aa4cb64b50f802bcb2f90727e5dcf"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.239985 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.241861 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" event={"ID":"f9c34353-2dbd-495c-9fc8-44773dc2bd68","Type":"ContainerStarted","Data":"03d53e84bcb796dfc4b968021a78810cb7da6c06a13af030f38ac52205f953fc"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.242350 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.243743 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" event={"ID":"c46c86f9-64ab-4020-9c49-799d926ba3ad","Type":"ContainerStarted","Data":"b4ad6e34c48898e730d0dab3c5e3032ef1318fb0ede29f1ba28be904c03ec741"}
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.253269 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.119354831 podStartE2EDuration="6.253244403s" podCreationTimestamp="2025-11-28 17:13:32 +0000 UTC" firstStartedPulling="2025-11-28 17:13:33.998473799 +0000 UTC m=+916.047394734" lastFinishedPulling="2025-11-28 17:13:37.132363401 +0000 UTC m=+919.181284306" observedRunningTime="2025-11-28 17:13:38.247227615 +0000 UTC m=+920.296148520" watchObservedRunningTime="2025-11-28 17:13:38.253244403 +0000 UTC m=+920.302165308"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.275507 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.329184214 podStartE2EDuration="6.275489778s" podCreationTimestamp="2025-11-28 17:13:32 +0000 UTC" firstStartedPulling="2025-11-28 17:13:34.23807309 +0000 UTC m=+916.286993995" lastFinishedPulling="2025-11-28 17:13:37.184378654 +0000 UTC m=+919.233299559" observedRunningTime="2025-11-28 17:13:38.269842412 +0000 UTC m=+920.318763327" watchObservedRunningTime="2025-11-28 17:13:38.275489778 +0000 UTC m=+920.324410693"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.293475 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" podStartSLOduration=2.506706426 podStartE2EDuration="6.293452497s" podCreationTimestamp="2025-11-28 17:13:32 +0000 UTC" firstStartedPulling="2025-11-28 17:13:33.402221548 +0000 UTC m=+915.451142453" lastFinishedPulling="2025-11-28 17:13:37.188967619 +0000 UTC m=+919.237888524" observedRunningTime="2025-11-28 17:13:38.291321795 +0000 UTC m=+920.340242700" watchObservedRunningTime="2025-11-28 17:13:38.293452497 +0000 UTC m=+920.342373402"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.331795 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" podStartSLOduration=2.303952362 podStartE2EDuration="6.331767006s" podCreationTimestamp="2025-11-28 17:13:32 +0000 UTC" firstStartedPulling="2025-11-28 17:13:33.160255348 +0000 UTC m=+915.209176253" lastFinishedPulling="2025-11-28 17:13:37.188069992 +0000 UTC m=+919.236990897" observedRunningTime="2025-11-28 17:13:38.310332155 +0000 UTC m=+920.359253060" watchObservedRunningTime="2025-11-28 17:13:38.331767006 +0000 UTC m=+920.380687911"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.335969 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" podStartSLOduration=3.320081047 podStartE2EDuration="7.33595152s" podCreationTimestamp="2025-11-28 17:13:31 +0000 UTC" firstStartedPulling="2025-11-28 17:13:33.174045534 +0000 UTC m=+915.222966439" lastFinishedPulling="2025-11-28 17:13:37.189915987 +0000 UTC m=+919.238836912" observedRunningTime="2025-11-28 17:13:38.330264292 +0000 UTC m=+920.379185217" watchObservedRunningTime="2025-11-28 17:13:38.33595152 +0000 UTC m=+920.384872415"
Nov 28 17:13:38 crc kubenswrapper[5024]: I1128 17:13:38.354160 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.4745751289999998 podStartE2EDuration="6.354135936s" podCreationTimestamp="2025-11-28 17:13:32 +0000 UTC" firstStartedPulling="2025-11-28 17:13:34.310735151 +0000 UTC m=+916.359656056" lastFinishedPulling="2025-11-28 17:13:37.190295958 +0000 UTC m=+919.239216863" observedRunningTime="2025-11-28 17:13:38.349948962 +0000 UTC m=+920.398869877" watchObservedRunningTime="2025-11-28 17:13:38.354135936 +0000 UTC m=+920.403056841"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.260165 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" event={"ID":"c46c86f9-64ab-4020-9c49-799d926ba3ad","Type":"ContainerStarted","Data":"67c74581425da14fdda5ad73f858cd75c7b76333f7206c26aad8cccb7e533775"}
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.260883 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.260929 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.262307 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" event={"ID":"000df583-958f-43ae-b8f5-36a537d3d3d8","Type":"ContainerStarted","Data":"a1b1c25495abd886407f63e9278bc6e36eea91cc7e2b72b844db9bc3f7264d69"}
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.263259 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.263301 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.275627 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.277844 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.279184 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.287081 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-zdmvm" podStartSLOduration=2.686921807 podStartE2EDuration="8.287068657s" podCreationTimestamp="2025-11-28 17:13:32 +0000 UTC" firstStartedPulling="2025-11-28 17:13:34.099062883 +0000 UTC m=+916.147983808" lastFinishedPulling="2025-11-28 17:13:39.699209753 +0000 UTC m=+921.748130658" observedRunningTime="2025-11-28 17:13:40.284669536 +0000 UTC m=+922.333590461" watchObservedRunningTime="2025-11-28 17:13:40.287068657 +0000 UTC m=+922.335989562"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.289781 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr"
Nov 28 17:13:40 crc kubenswrapper[5024]: I1128 17:13:40.336224 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-8f58fb6f6-qsbvr" podStartSLOduration=2.864188821 podStartE2EDuration="8.336201785s" podCreationTimestamp="2025-11-28 17:13:32 +0000 UTC" firstStartedPulling="2025-11-28 17:13:34.231706042 +0000 UTC m=+916.280626947" lastFinishedPulling="2025-11-28 17:13:39.703719006 +0000 UTC m=+921.752639911" observedRunningTime="2025-11-28 17:13:40.332206917 +0000 UTC m=+922.381127822" watchObservedRunningTime="2025-11-28 17:13:40.336201785 +0000 UTC m=+922.385122690"
Nov 28 17:13:47 crc kubenswrapper[5024]: I1128 17:13:47.873080 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dvxlz"]
Nov 28 17:13:47 crc kubenswrapper[5024]: I1128 17:13:47.876618 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvxlz"
Nov 28 17:13:47 crc kubenswrapper[5024]: I1128 17:13:47.878379 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvxlz"]
Nov 28 17:13:47 crc kubenswrapper[5024]: I1128 17:13:47.911347 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-utilities\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz"
Nov 28 17:13:47 crc kubenswrapper[5024]: I1128 17:13:47.911725 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjp7r\" (UniqueName: \"kubernetes.io/projected/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-kube-api-access-vjp7r\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz"
Nov 28 17:13:47 crc kubenswrapper[5024]: I1128 17:13:47.911832 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-catalog-content\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz"
Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.013381 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjp7r\" (UniqueName: \"kubernetes.io/projected/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-kube-api-access-vjp7r\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz"
Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.013797 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-catalog-content\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz"
Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.013824 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-utilities\") pod
\"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.014676 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-catalog-content\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.014816 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-utilities\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.038581 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjp7r\" (UniqueName: \"kubernetes.io/projected/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-kube-api-access-vjp7r\") pod \"community-operators-dvxlz\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.199914 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:48 crc kubenswrapper[5024]: I1128 17:13:48.748864 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dvxlz"] Nov 28 17:13:48 crc kubenswrapper[5024]: W1128 17:13:48.751652 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e28c2f6_d7ef_4d61_a78a_c4c76b4aa2d3.slice/crio-6fe17933f1e2fbd710a1736141a1fda0e552220ffc55210ad73fc34a366425f8 WatchSource:0}: Error finding container 6fe17933f1e2fbd710a1736141a1fda0e552220ffc55210ad73fc34a366425f8: Status 404 returned error can't find the container with id 6fe17933f1e2fbd710a1736141a1fda0e552220ffc55210ad73fc34a366425f8 Nov 28 17:13:49 crc kubenswrapper[5024]: I1128 17:13:49.333694 5024 generic.go:334] "Generic (PLEG): container finished" podID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerID="8e2ff02328ce8f384ca412b626ebf5447704b591359859d30098b3c01247248b" exitCode=0 Nov 28 17:13:49 crc kubenswrapper[5024]: I1128 17:13:49.333812 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvxlz" event={"ID":"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3","Type":"ContainerDied","Data":"8e2ff02328ce8f384ca412b626ebf5447704b591359859d30098b3c01247248b"} Nov 28 17:13:49 crc kubenswrapper[5024]: I1128 17:13:49.334134 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvxlz" event={"ID":"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3","Type":"ContainerStarted","Data":"6fe17933f1e2fbd710a1736141a1fda0e552220ffc55210ad73fc34a366425f8"} Nov 28 17:13:51 crc kubenswrapper[5024]: I1128 17:13:51.351148 5024 generic.go:334] "Generic (PLEG): container finished" podID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerID="b2ed89e411b98bf961d6850fef02917c01eb9523e75cd74f59c3827191bc7a13" exitCode=0 Nov 28 17:13:51 crc kubenswrapper[5024]: I1128 17:13:51.351281 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvxlz" 
event={"ID":"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3","Type":"ContainerDied","Data":"b2ed89e411b98bf961d6850fef02917c01eb9523e75cd74f59c3827191bc7a13"} Nov 28 17:13:52 crc kubenswrapper[5024]: I1128 17:13:52.338009 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-mm6j7" Nov 28 17:13:52 crc kubenswrapper[5024]: I1128 17:13:52.364306 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvxlz" event={"ID":"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3","Type":"ContainerStarted","Data":"fe0fb21cdae6c3e173cf5f74b184ef4d719a2fc61dfc741bb233db4e7d08203a"} Nov 28 17:13:52 crc kubenswrapper[5024]: I1128 17:13:52.395799 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dvxlz" podStartSLOduration=2.870600086 podStartE2EDuration="5.39577418s" podCreationTimestamp="2025-11-28 17:13:47 +0000 UTC" firstStartedPulling="2025-11-28 17:13:49.336416775 +0000 UTC m=+931.385337720" lastFinishedPulling="2025-11-28 17:13:51.861590889 +0000 UTC m=+933.910511814" observedRunningTime="2025-11-28 17:13:52.385740655 +0000 UTC m=+934.434661570" watchObservedRunningTime="2025-11-28 17:13:52.39577418 +0000 UTC m=+934.444695105" Nov 28 17:13:52 crc kubenswrapper[5024]: I1128 17:13:52.760177 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-jzttp" Nov 28 17:13:52 crc kubenswrapper[5024]: I1128 17:13:52.823530 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-9pdl6" Nov 28 17:13:53 crc kubenswrapper[5024]: I1128 17:13:53.527400 5024 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 28 17:13:53 crc kubenswrapper[5024]: I1128 17:13:53.527483 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="91551520-15fb-40e8-9289-842fbcfadb7f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:13:53 crc kubenswrapper[5024]: I1128 17:13:53.680899 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:13:53 crc kubenswrapper[5024]: I1128 17:13:53.818309 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:13:58 crc kubenswrapper[5024]: I1128 17:13:58.200994 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:58 crc kubenswrapper[5024]: I1128 17:13:58.201439 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:58 crc kubenswrapper[5024]: I1128 17:13:58.244526 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:58 crc kubenswrapper[5024]: I1128 17:13:58.457719 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:13:58 crc kubenswrapper[5024]: I1128 17:13:58.526251 5024 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvxlz"] Nov 28 17:14:00 crc kubenswrapper[5024]: I1128 17:14:00.426002 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dvxlz" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="registry-server" containerID="cri-o://fe0fb21cdae6c3e173cf5f74b184ef4d719a2fc61dfc741bb233db4e7d08203a" gracePeriod=2 Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.436050 5024 generic.go:334] "Generic (PLEG): container finished" podID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerID="fe0fb21cdae6c3e173cf5f74b184ef4d719a2fc61dfc741bb233db4e7d08203a" exitCode=0 Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.436485 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvxlz" event={"ID":"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3","Type":"ContainerDied","Data":"fe0fb21cdae6c3e173cf5f74b184ef4d719a2fc61dfc741bb233db4e7d08203a"} Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.779495 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.888195 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjp7r\" (UniqueName: \"kubernetes.io/projected/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-kube-api-access-vjp7r\") pod \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.888299 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-catalog-content\") pod \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.888373 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-utilities\") pod \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\" (UID: \"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3\") " Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.889180 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-utilities" (OuterVolumeSpecName: "utilities") pod "8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" (UID: "8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.893809 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-kube-api-access-vjp7r" (OuterVolumeSpecName: "kube-api-access-vjp7r") pod "8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" (UID: "8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3"). InnerVolumeSpecName "kube-api-access-vjp7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.935655 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" (UID: "8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.990275 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjp7r\" (UniqueName: \"kubernetes.io/projected/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-kube-api-access-vjp7r\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.990321 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:01 crc kubenswrapper[5024]: I1128 17:14:01.990333 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:02 crc kubenswrapper[5024]: I1128 17:14:02.446918 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dvxlz" event={"ID":"8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3","Type":"ContainerDied","Data":"6fe17933f1e2fbd710a1736141a1fda0e552220ffc55210ad73fc34a366425f8"} Nov 28 17:14:02 crc kubenswrapper[5024]: I1128 17:14:02.447033 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dvxlz" Nov 28 17:14:02 crc kubenswrapper[5024]: I1128 17:14:02.447901 5024 scope.go:117] "RemoveContainer" containerID="fe0fb21cdae6c3e173cf5f74b184ef4d719a2fc61dfc741bb233db4e7d08203a" Nov 28 17:14:02 crc kubenswrapper[5024]: I1128 17:14:02.472742 5024 scope.go:117] "RemoveContainer" containerID="b2ed89e411b98bf961d6850fef02917c01eb9523e75cd74f59c3827191bc7a13" Nov 28 17:14:02 crc kubenswrapper[5024]: I1128 17:14:02.503279 5024 scope.go:117] "RemoveContainer" containerID="8e2ff02328ce8f384ca412b626ebf5447704b591359859d30098b3c01247248b" Nov 28 17:14:02 crc kubenswrapper[5024]: I1128 17:14:02.508626 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dvxlz"] Nov 28 17:14:02 crc kubenswrapper[5024]: I1128 17:14:02.511944 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dvxlz"] Nov 28 17:14:03 crc kubenswrapper[5024]: I1128 17:14:03.528600 5024 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 28 17:14:03 crc kubenswrapper[5024]: I1128 17:14:03.528674 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="91551520-15fb-40e8-9289-842fbcfadb7f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:14:04 crc kubenswrapper[5024]: I1128 17:14:04.509090 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" 
path="/var/lib/kubelet/pods/8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3/volumes" Nov 28 17:14:07 crc kubenswrapper[5024]: I1128 17:14:07.565320 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:14:07 crc kubenswrapper[5024]: I1128 17:14:07.567143 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:14:13 crc kubenswrapper[5024]: I1128 17:14:13.525081 5024 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 28 17:14:13 crc kubenswrapper[5024]: I1128 17:14:13.525596 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="91551520-15fb-40e8-9289-842fbcfadb7f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:14:23 crc kubenswrapper[5024]: I1128 17:14:23.527116 5024 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 28 17:14:23 crc kubenswrapper[5024]: I1128 17:14:23.527654 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="91551520-15fb-40e8-9289-842fbcfadb7f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:14:33 crc kubenswrapper[5024]: I1128 17:14:33.531204 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:14:37 crc kubenswrapper[5024]: I1128 17:14:37.565643 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:14:37 crc kubenswrapper[5024]: I1128 17:14:37.566570 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.740619 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-k62cq"] Nov 28 17:14:52 crc kubenswrapper[5024]: E1128 17:14:52.741569 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="registry-server" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.741590 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="registry-server" Nov 28 17:14:52 crc kubenswrapper[5024]: E1128 17:14:52.741608 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="extract-content" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.741616 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="extract-content" Nov 28 17:14:52 crc kubenswrapper[5024]: E1128 17:14:52.741644 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="extract-utilities" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.741656 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="extract-utilities" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.741829 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e28c2f6-d7ef-4d61-a78a-c4c76b4aa2d3" containerName="registry-server" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.742504 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.746614 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.747179 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.747409 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.747794 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qxtth" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.750078 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.751881 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850653 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-trusted-ca\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850740 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-metrics\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850778 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9grr\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-kube-api-access-b9grr\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850844 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-token\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850896 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/66d2375f-2d47-48c3-a02a-6b11f5069e57-datadir\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850928 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-entrypoint\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850960 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.850242 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-k62cq"] Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.852070 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66d2375f-2d47-48c3-a02a-6b11f5069e57-tmp\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.852140 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config-openshift-service-cacrt\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.852195 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-syslog-receiver\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.852227 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-sa-token\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.867769 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-k62cq"] Nov 28 17:14:52 crc kubenswrapper[5024]: E1128 17:14:52.868679 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir 
entrypoint kube-api-access-b9grr metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-k62cq" podUID="66d2375f-2d47-48c3-a02a-6b11f5069e57" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954165 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-metrics\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954214 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9grr\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-kube-api-access-b9grr\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954241 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-token\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954272 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/66d2375f-2d47-48c3-a02a-6b11f5069e57-datadir\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954296 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-entrypoint\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954321 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954366 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66d2375f-2d47-48c3-a02a-6b11f5069e57-tmp\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954372 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/66d2375f-2d47-48c3-a02a-6b11f5069e57-datadir\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954386 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config-openshift-service-cacrt\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: 
I1128 17:14:52.954523 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-syslog-receiver\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954560 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-sa-token\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.954633 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-trusted-ca\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.955209 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.955223 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config-openshift-service-cacrt\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.955370 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-entrypoint\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.955590 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-trusted-ca\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.960633 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-metrics\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.960840 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66d2375f-2d47-48c3-a02a-6b11f5069e57-tmp\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.962171 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-syslog-receiver\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " 
pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.962769 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-token\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.971906 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-sa-token\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:52 crc kubenswrapper[5024]: I1128 17:14:52.974191 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9grr\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-kube-api-access-b9grr\") pod \"collector-k62cq\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " pod="openshift-logging/collector-k62cq" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.852050 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-k62cq" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.865797 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-k62cq" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.971681 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66d2375f-2d47-48c3-a02a-6b11f5069e57-tmp\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.972139 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-token\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.972401 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-entrypoint\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.972523 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-syslog-receiver\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.972668 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9grr\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-kube-api-access-b9grr\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.972849 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-metrics\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: 
\"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.973078 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.973251 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config-openshift-service-cacrt\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.973362 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-sa-token\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.973549 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/66d2375f-2d47-48c3-a02a-6b11f5069e57-datadir\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.973754 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-trusted-ca\") pod \"66d2375f-2d47-48c3-a02a-6b11f5069e57\" (UID: \"66d2375f-2d47-48c3-a02a-6b11f5069e57\") " Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.976456 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66d2375f-2d47-48c3-a02a-6b11f5069e57-datadir" (OuterVolumeSpecName: "datadir") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.976779 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.977857 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config" (OuterVolumeSpecName: "config") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.977933 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.978777 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.984223 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-metrics" (OuterVolumeSpecName: "metrics") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.984242 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-token" (OuterVolumeSpecName: "collector-token") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.985416 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-kube-api-access-b9grr" (OuterVolumeSpecName: "kube-api-access-b9grr") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "kube-api-access-b9grr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.986130 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66d2375f-2d47-48c3-a02a-6b11f5069e57-tmp" (OuterVolumeSpecName: "tmp") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.986260 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:14:53 crc kubenswrapper[5024]: I1128 17:14:53.987772 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-sa-token" (OuterVolumeSpecName: "sa-token") pod "66d2375f-2d47-48c3-a02a-6b11f5069e57" (UID: "66d2375f-2d47-48c3-a02a-6b11f5069e57"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.098926 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.098978 5024 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.098994 5024 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.099007 5024 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/66d2375f-2d47-48c3-a02a-6b11f5069e57-datadir\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.099035 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.099045 5024 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66d2375f-2d47-48c3-a02a-6b11f5069e57-tmp\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.100731 5024 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-token\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.100777 5024 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/66d2375f-2d47-48c3-a02a-6b11f5069e57-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.100794 5024 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.100808 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9grr\" (UniqueName: \"kubernetes.io/projected/66d2375f-2d47-48c3-a02a-6b11f5069e57-kube-api-access-b9grr\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.100861 5024 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/66d2375f-2d47-48c3-a02a-6b11f5069e57-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.857913 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-k62cq" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.897391 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-k62cq"] Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.908277 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-k62cq"] Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.915869 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-4f7qn"] Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.917169 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4f7qn" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.922151 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.923069 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qxtth" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.923378 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.923420 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.923763 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.938450 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4f7qn"] Nov 28 17:14:54 crc kubenswrapper[5024]: I1128 17:14:54.952760 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.016541 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-collector-syslog-receiver\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.016630 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-trusted-ca\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.016678 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-sa-token\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.016839 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-datadir\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.016938 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-config-openshift-service-cacrt\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.017108 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-config\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.017162 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-collector-token\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.017217 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b256r\" (UniqueName: \"kubernetes.io/projected/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-kube-api-access-b256r\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.017285 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-metrics\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.017336 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-entrypoint\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.017385 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-tmp\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120076 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-tmp\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120158 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-collector-syslog-receiver\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120183 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-trusted-ca\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120207 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-sa-token\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120227 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-datadir\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120250 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-config-openshift-service-cacrt\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120283 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-config\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120303 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-collector-token\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120336 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b256r\" (UniqueName: \"kubernetes.io/projected/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-kube-api-access-b256r\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120347 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-datadir\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120361 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-metrics\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.120431 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-entrypoint\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.121107 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-trusted-ca\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.121146 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-entrypoint\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.122249 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-config-openshift-service-cacrt\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.122417 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-config\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.125567 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-metrics\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.125674 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-collector-syslog-receiver\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.126511 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-collector-token\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.126559 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-tmp\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.145570 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b256r\" (UniqueName: \"kubernetes.io/projected/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-kube-api-access-b256r\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.145911 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/c0fc3afa-db9d-4db8-9d2b-acf321068b1e-sa-token\") pod \"collector-4f7qn\" (UID: \"c0fc3afa-db9d-4db8-9d2b-acf321068b1e\") " pod="openshift-logging/collector-4f7qn" Nov 28 
17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.235749 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4f7qn" Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.656436 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4f7qn"] Nov 28 17:14:55 crc kubenswrapper[5024]: I1128 17:14:55.865987 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4f7qn" event={"ID":"c0fc3afa-db9d-4db8-9d2b-acf321068b1e","Type":"ContainerStarted","Data":"422b947f29137a496e885c5fd09ec54a64b156682640a25709e8f1cb542f8a70"} Nov 28 17:14:56 crc kubenswrapper[5024]: I1128 17:14:56.506065 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d2375f-2d47-48c3-a02a-6b11f5069e57" path="/var/lib/kubelet/pods/66d2375f-2d47-48c3-a02a-6b11f5069e57/volumes" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.030843 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r9wl6"] Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.032681 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.046562 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r9wl6"] Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.156987 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqblf\" (UniqueName: \"kubernetes.io/projected/3f737711-ef09-4470-a6de-f50a6fa1fa76-kube-api-access-wqblf\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.157076 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-catalog-content\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.157105 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-utilities\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.258863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-utilities\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.259437 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqblf\" (UniqueName: \"kubernetes.io/projected/3f737711-ef09-4470-a6de-f50a6fa1fa76-kube-api-access-wqblf\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.259446 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-utilities\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.259742 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-catalog-content\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.260148 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-catalog-content\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.281146 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqblf\" (UniqueName: \"kubernetes.io/projected/3f737711-ef09-4470-a6de-f50a6fa1fa76-kube-api-access-wqblf\") pod \"redhat-marketplace-r9wl6\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.361479 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.607614 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r9wl6"] Nov 28 17:14:57 crc kubenswrapper[5024]: W1128 17:14:57.612910 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f737711_ef09_4470_a6de_f50a6fa1fa76.slice/crio-53051f2bc06cb6e095fb66069702da2facc6c7937223a3a7e0f22844f300a4bf WatchSource:0}: Error finding container 53051f2bc06cb6e095fb66069702da2facc6c7937223a3a7e0f22844f300a4bf: Status 404 returned error can't find the container with id 53051f2bc06cb6e095fb66069702da2facc6c7937223a3a7e0f22844f300a4bf Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.892807 5024 generic.go:334] "Generic (PLEG): container finished" podID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerID="7df3942d242f00cdc45bf58061f81d5aaf8ab903e92c43e787fdfaf5d1c42c42" exitCode=0 Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.892920 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r9wl6" event={"ID":"3f737711-ef09-4470-a6de-f50a6fa1fa76","Type":"ContainerDied","Data":"7df3942d242f00cdc45bf58061f81d5aaf8ab903e92c43e787fdfaf5d1c42c42"} Nov 28 17:14:57 crc kubenswrapper[5024]: I1128 17:14:57.893230 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r9wl6" event={"ID":"3f737711-ef09-4470-a6de-f50a6fa1fa76","Type":"ContainerStarted","Data":"53051f2bc06cb6e095fb66069702da2facc6c7937223a3a7e0f22844f300a4bf"} Nov 28 17:14:59 crc kubenswrapper[5024]: I1128 17:14:59.913694 5024 generic.go:334] "Generic (PLEG): container finished" podID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerID="2a35f32a7435adcf0fcd36a1631ba021b5c6c0bc2b8bfa6d100c69a000a8ad8b" exitCode=0 Nov 28 17:14:59 crc 
kubenswrapper[5024]: I1128 17:14:59.913952 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r9wl6" event={"ID":"3f737711-ef09-4470-a6de-f50a6fa1fa76","Type":"ContainerDied","Data":"2a35f32a7435adcf0fcd36a1631ba021b5c6c0bc2b8bfa6d100c69a000a8ad8b"} Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.157262 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8"] Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.158614 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.163623 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.163828 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.183859 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8"] Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.216103 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78b6b2cf-174e-47d6-8532-b7cff728a185-secret-volume\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.216171 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gprd5\" (UniqueName: \"kubernetes.io/projected/78b6b2cf-174e-47d6-8532-b7cff728a185-kube-api-access-gprd5\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.216204 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78b6b2cf-174e-47d6-8532-b7cff728a185-config-volume\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.317747 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78b6b2cf-174e-47d6-8532-b7cff728a185-secret-volume\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.317846 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gprd5\" (UniqueName: \"kubernetes.io/projected/78b6b2cf-174e-47d6-8532-b7cff728a185-kube-api-access-gprd5\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.317887 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78b6b2cf-174e-47d6-8532-b7cff728a185-config-volume\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.319601 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78b6b2cf-174e-47d6-8532-b7cff728a185-config-volume\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.326298 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78b6b2cf-174e-47d6-8532-b7cff728a185-secret-volume\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.336633 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gprd5\" (UniqueName: \"kubernetes.io/projected/78b6b2cf-174e-47d6-8532-b7cff728a185-kube-api-access-gprd5\") pod \"collect-profiles-29405835-hz5m8\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:00 crc kubenswrapper[5024]: I1128 17:15:00.487773 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.093858 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kw68p"] Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.095960 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.108487 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kw68p"] Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.167522 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-catalog-content\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.167633 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt56p\" (UniqueName: \"kubernetes.io/projected/9a252b3e-0953-48ed-9aad-a7630867286d-kube-api-access-kt56p\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.167721 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-utilities\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.269836 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-catalog-content\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.269927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt56p\" (UniqueName: \"kubernetes.io/projected/9a252b3e-0953-48ed-9aad-a7630867286d-kube-api-access-kt56p\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.269987 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-utilities\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.270463 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-catalog-content\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.270529 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-utilities\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.302412 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kt56p\" (UniqueName: \"kubernetes.io/projected/9a252b3e-0953-48ed-9aad-a7630867286d-kube-api-access-kt56p\") pod \"certified-operators-kw68p\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:03 crc kubenswrapper[5024]: I1128 17:15:03.433561 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.472557 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8"] Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.573337 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kw68p"] Nov 28 17:15:04 crc kubenswrapper[5024]: W1128 17:15:04.590375 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a252b3e_0953_48ed_9aad_a7630867286d.slice/crio-3e27a1127c80e2ceedbdc7e137fafff012c027516710ba56ff5a5888be381dff WatchSource:0}: Error finding container 3e27a1127c80e2ceedbdc7e137fafff012c027516710ba56ff5a5888be381dff: Status 404 returned error can't find the container with id 3e27a1127c80e2ceedbdc7e137fafff012c027516710ba56ff5a5888be381dff Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.953130 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4f7qn" event={"ID":"c0fc3afa-db9d-4db8-9d2b-acf321068b1e","Type":"ContainerStarted","Data":"f1aebd432bdd8888e4bb47f06f22605a946cbb2936cf515de80fb00c77ead7de"} Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.954604 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" event={"ID":"78b6b2cf-174e-47d6-8532-b7cff728a185","Type":"ContainerStarted","Data":"d1eb27dbbb8813f7b95a9e70f4a44d3007d749f0a1dd58d00ac1b20c4dcce34a"} Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.954634 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" event={"ID":"78b6b2cf-174e-47d6-8532-b7cff728a185","Type":"ContainerStarted","Data":"8559c2b4e05c91a32c12584345a3f5ca81bd951b6347a346e4ded594a6412c74"} Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.957007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r9wl6" event={"ID":"3f737711-ef09-4470-a6de-f50a6fa1fa76","Type":"ContainerStarted","Data":"b8a7dd9ac6f1b3887362ccd3ed1b5cb28336f11fa909453961ba52a70b68a7ff"} Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.958585 5024 generic.go:334] "Generic (PLEG): container finished" podID="9a252b3e-0953-48ed-9aad-a7630867286d" containerID="6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d" exitCode=0 Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.958619 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw68p" event={"ID":"9a252b3e-0953-48ed-9aad-a7630867286d","Type":"ContainerDied","Data":"6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d"} Nov 28 17:15:04 crc kubenswrapper[5024]: I1128 17:15:04.958637 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw68p" 
event={"ID":"9a252b3e-0953-48ed-9aad-a7630867286d","Type":"ContainerStarted","Data":"3e27a1127c80e2ceedbdc7e137fafff012c027516710ba56ff5a5888be381dff"} Nov 28 17:15:05 crc kubenswrapper[5024]: I1128 17:15:05.010262 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-4f7qn" podStartSLOduration=2.603539597 podStartE2EDuration="11.010245213s" podCreationTimestamp="2025-11-28 17:14:54 +0000 UTC" firstStartedPulling="2025-11-28 17:14:55.669738179 +0000 UTC m=+997.718659074" lastFinishedPulling="2025-11-28 17:15:04.076443785 +0000 UTC m=+1006.125364690" observedRunningTime="2025-11-28 17:15:05.003531965 +0000 UTC m=+1007.052452870" watchObservedRunningTime="2025-11-28 17:15:05.010245213 +0000 UTC m=+1007.059166118" Nov 28 17:15:05 crc kubenswrapper[5024]: I1128 17:15:05.046530 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r9wl6" podStartSLOduration=1.891456488 podStartE2EDuration="8.046511341s" podCreationTimestamp="2025-11-28 17:14:57 +0000 UTC" firstStartedPulling="2025-11-28 17:14:57.895298153 +0000 UTC m=+999.944219058" lastFinishedPulling="2025-11-28 17:15:04.050353006 +0000 UTC m=+1006.099273911" observedRunningTime="2025-11-28 17:15:05.030127229 +0000 UTC m=+1007.079048134" watchObservedRunningTime="2025-11-28 17:15:05.046511341 +0000 UTC m=+1007.095432246" Nov 28 17:15:05 crc kubenswrapper[5024]: I1128 17:15:05.064342 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" podStartSLOduration=5.064326426 podStartE2EDuration="5.064326426s" podCreationTimestamp="2025-11-28 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:15:05.059520895 +0000 UTC m=+1007.108441800" watchObservedRunningTime="2025-11-28 17:15:05.064326426 +0000 UTC m=+1007.113247331" Nov 28 17:15:05 crc kubenswrapper[5024]: I1128 17:15:05.969482 5024 generic.go:334] "Generic (PLEG): container finished" podID="78b6b2cf-174e-47d6-8532-b7cff728a185" containerID="d1eb27dbbb8813f7b95a9e70f4a44d3007d749f0a1dd58d00ac1b20c4dcce34a" exitCode=0 Nov 28 17:15:05 crc kubenswrapper[5024]: I1128 17:15:05.969923 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" event={"ID":"78b6b2cf-174e-47d6-8532-b7cff728a185","Type":"ContainerDied","Data":"d1eb27dbbb8813f7b95a9e70f4a44d3007d749f0a1dd58d00ac1b20c4dcce34a"} Nov 28 17:15:05 crc kubenswrapper[5024]: I1128 17:15:05.974507 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw68p" event={"ID":"9a252b3e-0953-48ed-9aad-a7630867286d","Type":"ContainerStarted","Data":"065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25"} Nov 28 17:15:06 crc kubenswrapper[5024]: I1128 17:15:06.984463 5024 generic.go:334] "Generic (PLEG): container finished" podID="9a252b3e-0953-48ed-9aad-a7630867286d" containerID="065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25" exitCode=0 Nov 28 17:15:06 crc kubenswrapper[5024]: I1128 17:15:06.984531 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw68p" event={"ID":"9a252b3e-0953-48ed-9aad-a7630867286d","Type":"ContainerDied","Data":"065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25"} Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 
17:15:07.256761 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.354390 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78b6b2cf-174e-47d6-8532-b7cff728a185-secret-volume\") pod \"78b6b2cf-174e-47d6-8532-b7cff728a185\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.354704 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78b6b2cf-174e-47d6-8532-b7cff728a185-config-volume\") pod \"78b6b2cf-174e-47d6-8532-b7cff728a185\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.354767 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gprd5\" (UniqueName: \"kubernetes.io/projected/78b6b2cf-174e-47d6-8532-b7cff728a185-kube-api-access-gprd5\") pod \"78b6b2cf-174e-47d6-8532-b7cff728a185\" (UID: \"78b6b2cf-174e-47d6-8532-b7cff728a185\") " Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.355479 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b6b2cf-174e-47d6-8532-b7cff728a185-config-volume" (OuterVolumeSpecName: "config-volume") pod "78b6b2cf-174e-47d6-8532-b7cff728a185" (UID: "78b6b2cf-174e-47d6-8532-b7cff728a185"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.359906 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78b6b2cf-174e-47d6-8532-b7cff728a185-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "78b6b2cf-174e-47d6-8532-b7cff728a185" (UID: "78b6b2cf-174e-47d6-8532-b7cff728a185"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.360791 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b6b2cf-174e-47d6-8532-b7cff728a185-kube-api-access-gprd5" (OuterVolumeSpecName: "kube-api-access-gprd5") pod "78b6b2cf-174e-47d6-8532-b7cff728a185" (UID: "78b6b2cf-174e-47d6-8532-b7cff728a185"). InnerVolumeSpecName "kube-api-access-gprd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.363204 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.364786 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.417721 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.457287 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78b6b2cf-174e-47d6-8532-b7cff728a185-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.457324 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gprd5\" (UniqueName: \"kubernetes.io/projected/78b6b2cf-174e-47d6-8532-b7cff728a185-kube-api-access-gprd5\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.457334 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78b6b2cf-174e-47d6-8532-b7cff728a185-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.565224 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.565298 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.565341 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.566171 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88f26a0a596a708c394834d35e939b4bff9c97e9c07da03ec569d30bef11bf70"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.566243 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://88f26a0a596a708c394834d35e939b4bff9c97e9c07da03ec569d30bef11bf70" gracePeriod=600 Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.995810 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="88f26a0a596a708c394834d35e939b4bff9c97e9c07da03ec569d30bef11bf70" exitCode=0 Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.995882 5024 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"88f26a0a596a708c394834d35e939b4bff9c97e9c07da03ec569d30bef11bf70"} Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.996483 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"7d8f6a9c6d8434b82d8868ca2c29dd5353de86fc7a1c9949e65b4d17fd395785"} Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.996529 5024 scope.go:117] "RemoveContainer" containerID="b519f9b78edbf9b228fc85037669f9ab174eddbe4b594ce06b779c1bf0c5cf3c" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.998044 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.998060 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8" event={"ID":"78b6b2cf-174e-47d6-8532-b7cff728a185","Type":"ContainerDied","Data":"8559c2b4e05c91a32c12584345a3f5ca81bd951b6347a346e4ded594a6412c74"} Nov 28 17:15:07 crc kubenswrapper[5024]: I1128 17:15:07.998102 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8559c2b4e05c91a32c12584345a3f5ca81bd951b6347a346e4ded594a6412c74" Nov 28 17:15:08 crc kubenswrapper[5024]: I1128 17:15:08.001131 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw68p" event={"ID":"9a252b3e-0953-48ed-9aad-a7630867286d","Type":"ContainerStarted","Data":"e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c"} Nov 28 17:15:08 crc kubenswrapper[5024]: I1128 17:15:08.039165 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kw68p" podStartSLOduration=2.229453293 podStartE2EDuration="5.039147792s" podCreationTimestamp="2025-11-28 17:15:03 +0000 UTC" firstStartedPulling="2025-11-28 17:15:04.959982372 +0000 UTC m=+1007.008903277" lastFinishedPulling="2025-11-28 17:15:07.769676871 +0000 UTC m=+1009.818597776" observedRunningTime="2025-11-28 17:15:08.034935478 +0000 UTC m=+1010.083856383" watchObservedRunningTime="2025-11-28 17:15:08.039147792 +0000 UTC m=+1010.088068697" Nov 28 17:15:09 crc kubenswrapper[5024]: I1128 17:15:09.081054 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:15:09 crc kubenswrapper[5024]: I1128 17:15:09.674917 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r9wl6"] Nov 28 17:15:11 crc kubenswrapper[5024]: I1128 17:15:11.030365 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r9wl6" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="registry-server" containerID="cri-o://b8a7dd9ac6f1b3887362ccd3ed1b5cb28336f11fa909453961ba52a70b68a7ff" gracePeriod=2 Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.052467 5024 generic.go:334] "Generic (PLEG): container finished" podID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerID="b8a7dd9ac6f1b3887362ccd3ed1b5cb28336f11fa909453961ba52a70b68a7ff" exitCode=0 Nov 28 17:15:12 crc kubenswrapper[5024]: 
I1128 17:15:12.052565 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r9wl6" event={"ID":"3f737711-ef09-4470-a6de-f50a6fa1fa76","Type":"ContainerDied","Data":"b8a7dd9ac6f1b3887362ccd3ed1b5cb28336f11fa909453961ba52a70b68a7ff"} Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.583325 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.664369 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-catalog-content\") pod \"3f737711-ef09-4470-a6de-f50a6fa1fa76\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.664445 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqblf\" (UniqueName: \"kubernetes.io/projected/3f737711-ef09-4470-a6de-f50a6fa1fa76-kube-api-access-wqblf\") pod \"3f737711-ef09-4470-a6de-f50a6fa1fa76\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.664522 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-utilities\") pod \"3f737711-ef09-4470-a6de-f50a6fa1fa76\" (UID: \"3f737711-ef09-4470-a6de-f50a6fa1fa76\") " Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.665661 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-utilities" (OuterVolumeSpecName: "utilities") pod "3f737711-ef09-4470-a6de-f50a6fa1fa76" (UID: "3f737711-ef09-4470-a6de-f50a6fa1fa76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.672098 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f737711-ef09-4470-a6de-f50a6fa1fa76-kube-api-access-wqblf" (OuterVolumeSpecName: "kube-api-access-wqblf") pod "3f737711-ef09-4470-a6de-f50a6fa1fa76" (UID: "3f737711-ef09-4470-a6de-f50a6fa1fa76"). InnerVolumeSpecName "kube-api-access-wqblf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.685522 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f737711-ef09-4470-a6de-f50a6fa1fa76" (UID: "3f737711-ef09-4470-a6de-f50a6fa1fa76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.766664 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqblf\" (UniqueName: \"kubernetes.io/projected/3f737711-ef09-4470-a6de-f50a6fa1fa76-kube-api-access-wqblf\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.766707 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:12 crc kubenswrapper[5024]: I1128 17:15:12.766721 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f737711-ef09-4470-a6de-f50a6fa1fa76-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.061147 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r9wl6" event={"ID":"3f737711-ef09-4470-a6de-f50a6fa1fa76","Type":"ContainerDied","Data":"53051f2bc06cb6e095fb66069702da2facc6c7937223a3a7e0f22844f300a4bf"} Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.061210 5024 scope.go:117] "RemoveContainer" containerID="b8a7dd9ac6f1b3887362ccd3ed1b5cb28336f11fa909453961ba52a70b68a7ff" Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.061343 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r9wl6" Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.087597 5024 scope.go:117] "RemoveContainer" containerID="2a35f32a7435adcf0fcd36a1631ba021b5c6c0bc2b8bfa6d100c69a000a8ad8b" Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.094741 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r9wl6"] Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.101406 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r9wl6"] Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.114094 5024 scope.go:117] "RemoveContainer" containerID="7df3942d242f00cdc45bf58061f81d5aaf8ab903e92c43e787fdfaf5d1c42c42" Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.433754 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.433793 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:13 crc kubenswrapper[5024]: I1128 17:15:13.473842 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:14 crc kubenswrapper[5024]: I1128 17:15:14.112832 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:14 crc kubenswrapper[5024]: I1128 17:15:14.506989 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" path="/var/lib/kubelet/pods/3f737711-ef09-4470-a6de-f50a6fa1fa76/volumes" Nov 28 17:15:15 crc kubenswrapper[5024]: I1128 17:15:15.873529 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kw68p"] Nov 28 17:15:16 crc kubenswrapper[5024]: I1128 17:15:16.083419 5024 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-kw68p" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="registry-server" containerID="cri-o://e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c" gracePeriod=2 Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.009359 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.099672 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kw68p" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.099741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw68p" event={"ID":"9a252b3e-0953-48ed-9aad-a7630867286d","Type":"ContainerDied","Data":"e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c"} Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.099849 5024 scope.go:117] "RemoveContainer" containerID="e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.100254 5024 generic.go:334] "Generic (PLEG): container finished" podID="9a252b3e-0953-48ed-9aad-a7630867286d" containerID="e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c" exitCode=0 Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.100344 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kw68p" event={"ID":"9a252b3e-0953-48ed-9aad-a7630867286d","Type":"ContainerDied","Data":"3e27a1127c80e2ceedbdc7e137fafff012c027516710ba56ff5a5888be381dff"} Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.126982 5024 scope.go:117] "RemoveContainer" containerID="065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.150791 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt56p\" (UniqueName: \"kubernetes.io/projected/9a252b3e-0953-48ed-9aad-a7630867286d-kube-api-access-kt56p\") pod \"9a252b3e-0953-48ed-9aad-a7630867286d\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.150945 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-utilities\") pod \"9a252b3e-0953-48ed-9aad-a7630867286d\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.152033 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-utilities" (OuterVolumeSpecName: "utilities") pod "9a252b3e-0953-48ed-9aad-a7630867286d" (UID: "9a252b3e-0953-48ed-9aad-a7630867286d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.153290 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-catalog-content\") pod \"9a252b3e-0953-48ed-9aad-a7630867286d\" (UID: \"9a252b3e-0953-48ed-9aad-a7630867286d\") " Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.157963 5024 scope.go:117] "RemoveContainer" containerID="6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.158600 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a252b3e-0953-48ed-9aad-a7630867286d-kube-api-access-kt56p" (OuterVolumeSpecName: "kube-api-access-kt56p") pod "9a252b3e-0953-48ed-9aad-a7630867286d" (UID: "9a252b3e-0953-48ed-9aad-a7630867286d"). InnerVolumeSpecName "kube-api-access-kt56p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.159359 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.159412 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt56p\" (UniqueName: \"kubernetes.io/projected/9a252b3e-0953-48ed-9aad-a7630867286d-kube-api-access-kt56p\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.193923 5024 scope.go:117] "RemoveContainer" containerID="e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c" Nov 28 17:15:17 crc kubenswrapper[5024]: E1128 17:15:17.195308 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c\": container with ID starting with e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c not found: ID does not exist" containerID="e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.195354 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c"} err="failed to get container status \"e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c\": rpc error: code = NotFound desc = could not find container \"e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c\": container with ID starting with e45ea952c4a520e0851813b6f5564ccc7695da801ea8cb2dca81e5b9646ebe3c not found: ID does not exist" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.195382 5024 scope.go:117] "RemoveContainer" containerID="065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25" Nov 28 17:15:17 crc kubenswrapper[5024]: E1128 17:15:17.195862 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25\": container with ID starting with 065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25 not found: ID does not exist" containerID="065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.195891 5024 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25"} err="failed to get container status \"065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25\": rpc error: code = NotFound desc = could not find container \"065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25\": container with ID starting with 065721aea6c4204424a929ee3db635fc329fed5f1d79a2253c21aadb40e66d25 not found: ID does not exist" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.195905 5024 scope.go:117] "RemoveContainer" containerID="6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d" Nov 28 17:15:17 crc kubenswrapper[5024]: E1128 17:15:17.196341 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d\": container with ID starting with 6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d not found: ID does not exist" containerID="6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.196370 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d"} err="failed to get container status \"6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d\": rpc error: code = NotFound desc = could not find container \"6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d\": container with ID starting with 6c2ddc80b8bfc17de513c0a7659dd838561d7d40949e67138673c747efc73c9d not found: ID does not exist" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.212207 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a252b3e-0953-48ed-9aad-a7630867286d" (UID: "9a252b3e-0953-48ed-9aad-a7630867286d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.261165 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a252b3e-0953-48ed-9aad-a7630867286d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.434411 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kw68p"] Nov 28 17:15:17 crc kubenswrapper[5024]: I1128 17:15:17.438994 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kw68p"] Nov 28 17:15:18 crc kubenswrapper[5024]: I1128 17:15:18.511826 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" path="/var/lib/kubelet/pods/9a252b3e-0953-48ed-9aad-a7630867286d/volumes" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.050643 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl"] Nov 28 17:15:31 crc kubenswrapper[5024]: E1128 17:15:31.051916 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="extract-content" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.051934 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="extract-content" Nov 28 17:15:31 crc kubenswrapper[5024]: E1128 17:15:31.051951 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="extract-utilities" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.051957 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="extract-utilities" Nov 28 17:15:31 crc kubenswrapper[5024]: E1128 17:15:31.051977 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="extract-content" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.051986 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="extract-content" Nov 28 17:15:31 crc kubenswrapper[5024]: E1128 17:15:31.052006 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="registry-server" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.052013 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="registry-server" Nov 28 17:15:31 crc kubenswrapper[5024]: E1128 17:15:31.052046 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b6b2cf-174e-47d6-8532-b7cff728a185" containerName="collect-profiles" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.052052 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b6b2cf-174e-47d6-8532-b7cff728a185" containerName="collect-profiles" Nov 28 17:15:31 crc kubenswrapper[5024]: E1128 17:15:31.052060 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="extract-utilities" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.052066 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="extract-utilities" Nov 28 17:15:31 crc 
kubenswrapper[5024]: E1128 17:15:31.052071 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="registry-server" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.052077 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="registry-server" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.052218 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f737711-ef09-4470-a6de-f50a6fa1fa76" containerName="registry-server" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.052231 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b6b2cf-174e-47d6-8532-b7cff728a185" containerName="collect-profiles" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.052240 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a252b3e-0953-48ed-9aad-a7630867286d" containerName="registry-server" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.053665 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.056091 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.063980 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl"] Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.136579 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.136714 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.136782 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szpj7\" (UniqueName: \"kubernetes.io/projected/0cdd446a-fa00-4fe8-8a53-979244f522b4-kube-api-access-szpj7\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.238554 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szpj7\" (UniqueName: \"kubernetes.io/projected/0cdd446a-fa00-4fe8-8a53-979244f522b4-kube-api-access-szpj7\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 
17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.238672 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.238768 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.239515 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.239767 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.269244 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szpj7\" (UniqueName: \"kubernetes.io/projected/0cdd446a-fa00-4fe8-8a53-979244f522b4-kube-api-access-szpj7\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.379642 5024 util.go:30] "No sandbox for pod can be found. 
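
The cpu_manager.go:410 and state_mem.go:107 pairs above show the kubelet sweeping out per-container CPU and memory assignments left behind by terminated pods (two retired catalog pods and a collect-profiles job) before admitting the new bundle pod. A rough sketch of that stale-state sweep, with hypothetical types standing in for the kubelet's checkpointed state:

package resourcestate

// state maps podUID -> containerName -> reserved CPU set (a string here
// for brevity; the kubelet stores a real cpuset).
type state struct {
	assignments map[string]map[string]string
}

// removeStaleState drops assignments for any pod UID that is no longer
// active, mirroring the "RemoveStaleState: removing container" and
// "Deleted CPUSet assignment" lines emitted per container above.
func (s *state) removeStaleState(activePods map[string]bool) {
	for podUID, containers := range s.assignments {
		if activePods[podUID] {
			continue
		}
		for containerName := range containers {
			delete(containers, containerName)
		}
		delete(s.assignments, podUID)
	}
}
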
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:31 crc kubenswrapper[5024]: I1128 17:15:31.862901 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl"] Nov 28 17:15:32 crc kubenswrapper[5024]: I1128 17:15:32.219977 5024 generic.go:334] "Generic (PLEG): container finished" podID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerID="e165c29863a4d3de1cd0f850a933deeef5a160be2f5dac6cdd7fb8964e0bcc71" exitCode=0 Nov 28 17:15:32 crc kubenswrapper[5024]: I1128 17:15:32.220106 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" event={"ID":"0cdd446a-fa00-4fe8-8a53-979244f522b4","Type":"ContainerDied","Data":"e165c29863a4d3de1cd0f850a933deeef5a160be2f5dac6cdd7fb8964e0bcc71"} Nov 28 17:15:32 crc kubenswrapper[5024]: I1128 17:15:32.220135 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" event={"ID":"0cdd446a-fa00-4fe8-8a53-979244f522b4","Type":"ContainerStarted","Data":"a0984489bbc51c3caea180c6699ef1a1d09e8a150e4bc044a4777d503914b778"} Nov 28 17:15:34 crc kubenswrapper[5024]: I1128 17:15:34.243074 5024 generic.go:334] "Generic (PLEG): container finished" podID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerID="8868cad291de4544a3c46f1b3ef1b1fd4b8e9e90fa311144ccc907c0d879373d" exitCode=0 Nov 28 17:15:34 crc kubenswrapper[5024]: I1128 17:15:34.243175 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" event={"ID":"0cdd446a-fa00-4fe8-8a53-979244f522b4","Type":"ContainerDied","Data":"8868cad291de4544a3c46f1b3ef1b1fd4b8e9e90fa311144ccc907c0d879373d"} Nov 28 17:15:35 crc kubenswrapper[5024]: I1128 17:15:35.254454 5024 generic.go:334] "Generic (PLEG): container finished" podID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerID="56afe9f0a0be3af6bd394480bb640a4b16e5d012420b184662b33493342ad98c" exitCode=0 Nov 28 17:15:35 crc kubenswrapper[5024]: I1128 17:15:35.254499 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" event={"ID":"0cdd446a-fa00-4fe8-8a53-979244f522b4","Type":"ContainerDied","Data":"56afe9f0a0be3af6bd394480bb640a4b16e5d012420b184662b33493342ad98c"} Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.581047 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.743040 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-bundle\") pod \"0cdd446a-fa00-4fe8-8a53-979244f522b4\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.743187 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-util\") pod \"0cdd446a-fa00-4fe8-8a53-979244f522b4\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.743340 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szpj7\" (UniqueName: \"kubernetes.io/projected/0cdd446a-fa00-4fe8-8a53-979244f522b4-kube-api-access-szpj7\") pod \"0cdd446a-fa00-4fe8-8a53-979244f522b4\" (UID: \"0cdd446a-fa00-4fe8-8a53-979244f522b4\") " Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.743920 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-bundle" (OuterVolumeSpecName: "bundle") pod "0cdd446a-fa00-4fe8-8a53-979244f522b4" (UID: "0cdd446a-fa00-4fe8-8a53-979244f522b4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.769931 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-util" (OuterVolumeSpecName: "util") pod "0cdd446a-fa00-4fe8-8a53-979244f522b4" (UID: "0cdd446a-fa00-4fe8-8a53-979244f522b4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.770438 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cdd446a-fa00-4fe8-8a53-979244f522b4-kube-api-access-szpj7" (OuterVolumeSpecName: "kube-api-access-szpj7") pod "0cdd446a-fa00-4fe8-8a53-979244f522b4" (UID: "0cdd446a-fa00-4fe8-8a53-979244f522b4"). InnerVolumeSpecName "kube-api-access-szpj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.845411 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szpj7\" (UniqueName: \"kubernetes.io/projected/0cdd446a-fa00-4fe8-8a53-979244f522b4-kube-api-access-szpj7\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.845461 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:36 crc kubenswrapper[5024]: I1128 17:15:36.845476 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0cdd446a-fa00-4fe8-8a53-979244f522b4-util\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:37 crc kubenswrapper[5024]: I1128 17:15:37.271358 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" event={"ID":"0cdd446a-fa00-4fe8-8a53-979244f522b4","Type":"ContainerDied","Data":"a0984489bbc51c3caea180c6699ef1a1d09e8a150e4bc044a4777d503914b778"} Nov 28 17:15:37 crc kubenswrapper[5024]: I1128 17:15:37.271401 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0984489bbc51c3caea180c6699ef1a1d09e8a150e4bc044a4777d503914b778" Nov 28 17:15:37 crc kubenswrapper[5024]: I1128 17:15:37.271494 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.622530 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54"] Nov 28 17:15:39 crc kubenswrapper[5024]: E1128 17:15:39.623267 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerName="util" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.623287 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerName="util" Nov 28 17:15:39 crc kubenswrapper[5024]: E1128 17:15:39.623305 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerName="pull" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.623314 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerName="pull" Nov 28 17:15:39 crc kubenswrapper[5024]: E1128 17:15:39.623345 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerName="extract" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.623354 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerName="extract" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.623504 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdd446a-fa00-4fe8-8a53-979244f522b4" containerName="extract" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.624226 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.626913 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.627178 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-59xqn" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.627324 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.640153 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54"] Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.801975 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rcwb\" (UniqueName: \"kubernetes.io/projected/34dea1ac-8ada-4d52-b458-6383c62ad1d4-kube-api-access-6rcwb\") pod \"nmstate-operator-5b5b58f5c8-gkv54\" (UID: \"34dea1ac-8ada-4d52-b458-6383c62ad1d4\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.903593 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rcwb\" (UniqueName: \"kubernetes.io/projected/34dea1ac-8ada-4d52-b458-6383c62ad1d4-kube-api-access-6rcwb\") pod \"nmstate-operator-5b5b58f5c8-gkv54\" (UID: \"34dea1ac-8ada-4d52-b458-6383c62ad1d4\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.920378 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rcwb\" (UniqueName: \"kubernetes.io/projected/34dea1ac-8ada-4d52-b458-6383c62ad1d4-kube-api-access-6rcwb\") pod \"nmstate-operator-5b5b58f5c8-gkv54\" (UID: \"34dea1ac-8ada-4d52-b458-6383c62ad1d4\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" Nov 28 17:15:39 crc kubenswrapper[5024]: I1128 17:15:39.957409 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" Nov 28 17:15:40 crc kubenswrapper[5024]: I1128 17:15:40.425733 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54"] Nov 28 17:15:41 crc kubenswrapper[5024]: I1128 17:15:41.299206 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" event={"ID":"34dea1ac-8ada-4d52-b458-6383c62ad1d4","Type":"ContainerStarted","Data":"5251f52736f1e8b846ab3815d5620d1561775fef49bdaeeddf4b33488a22fb59"} Nov 28 17:15:43 crc kubenswrapper[5024]: I1128 17:15:43.314055 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" event={"ID":"34dea1ac-8ada-4d52-b458-6383c62ad1d4","Type":"ContainerStarted","Data":"7c2b03075a00dc3e01f60635e69fefb47d464735ea58555c241b913a526adcd3"} Nov 28 17:15:43 crc kubenswrapper[5024]: I1128 17:15:43.334189 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-gkv54" podStartSLOduration=2.184461824 podStartE2EDuration="4.33416884s" podCreationTimestamp="2025-11-28 17:15:39 +0000 UTC" firstStartedPulling="2025-11-28 17:15:40.429545846 +0000 UTC m=+1042.478466741" lastFinishedPulling="2025-11-28 17:15:42.579252862 +0000 UTC m=+1044.628173757" observedRunningTime="2025-11-28 17:15:43.330623227 +0000 UTC m=+1045.379544142" watchObservedRunningTime="2025-11-28 17:15:43.33416884 +0000 UTC m=+1045.383089745" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.313426 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.314733 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.318441 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.318543 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sdvt8" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.324044 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.326326 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.340761 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.368286 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.398078 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8gxnt"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.399170 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.495074 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z7zv\" (UniqueName: \"kubernetes.io/projected/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-kube-api-access-9z7zv\") pod \"nmstate-webhook-5f6d4c5ccb-pw8c8\" (UID: \"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.495156 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-ovs-socket\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.495190 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fd59\" (UniqueName: \"kubernetes.io/projected/d996313c-5bc6-4930-a202-ca55774866c0-kube-api-access-8fd59\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.495394 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-dbus-socket\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.495573 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbmv\" (UniqueName: \"kubernetes.io/projected/570a7ddb-1a00-4e87-8db0-32760d8455d9-kube-api-access-qfbmv\") pod \"nmstate-metrics-7f946cbc9-spqhp\" (UID: \"570a7ddb-1a00-4e87-8db0-32760d8455d9\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.495597 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-nmstate-lock\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.495630 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pw8c8\" (UID: \"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.522042 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.522974 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.527514 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bl2vw" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.527850 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.532610 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.542421 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.596979 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbmv\" (UniqueName: \"kubernetes.io/projected/570a7ddb-1a00-4e87-8db0-32760d8455d9-kube-api-access-qfbmv\") pod \"nmstate-metrics-7f946cbc9-spqhp\" (UID: \"570a7ddb-1a00-4e87-8db0-32760d8455d9\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597054 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-nmstate-lock\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597090 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pw8c8\" (UID: \"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597192 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-nmstate-lock\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: E1128 17:15:44.597332 5024 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 28 17:15:44 crc kubenswrapper[5024]: E1128 17:15:44.597393 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-tls-key-pair podName:bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:45.097372185 +0000 UTC m=+1047.146293100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-pw8c8" (UID: "bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5") : secret "openshift-nmstate-webhook" not found Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597205 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z7zv\" (UniqueName: \"kubernetes.io/projected/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-kube-api-access-9z7zv\") pod \"nmstate-webhook-5f6d4c5ccb-pw8c8\" (UID: \"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597595 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-ovs-socket\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597624 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fd59\" (UniqueName: \"kubernetes.io/projected/d996313c-5bc6-4930-a202-ca55774866c0-kube-api-access-8fd59\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597684 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-dbus-socket\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.597718 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-ovs-socket\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.598097 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d996313c-5bc6-4930-a202-ca55774866c0-dbus-socket\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.617908 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbmv\" (UniqueName: \"kubernetes.io/projected/570a7ddb-1a00-4e87-8db0-32760d8455d9-kube-api-access-qfbmv\") pod \"nmstate-metrics-7f946cbc9-spqhp\" (UID: \"570a7ddb-1a00-4e87-8db0-32760d8455d9\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.621877 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fd59\" (UniqueName: \"kubernetes.io/projected/d996313c-5bc6-4930-a202-ca55774866c0-kube-api-access-8fd59\") pod \"nmstate-handler-8gxnt\" (UID: \"d996313c-5bc6-4930-a202-ca55774866c0\") " pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.623759 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z7zv\" (UniqueName: 
\"kubernetes.io/projected/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-kube-api-access-9z7zv\") pod \"nmstate-webhook-5f6d4c5ccb-pw8c8\" (UID: \"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.666910 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.699875 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fda0f5a7-9a36-4090-8a0e-f3c635396eff-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.700388 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fda0f5a7-9a36-4090-8a0e-f3c635396eff-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.700436 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttn28\" (UniqueName: \"kubernetes.io/projected/fda0f5a7-9a36-4090-8a0e-f3c635396eff-kube-api-access-ttn28\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.729453 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.740971 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-78fdf7cd4f-99mvs"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.751674 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.759011 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78fdf7cd4f-99mvs"] Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.801546 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fda0f5a7-9a36-4090-8a0e-f3c635396eff-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.801606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fda0f5a7-9a36-4090-8a0e-f3c635396eff-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.801656 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttn28\" (UniqueName: \"kubernetes.io/projected/fda0f5a7-9a36-4090-8a0e-f3c635396eff-kube-api-access-ttn28\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.802870 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fda0f5a7-9a36-4090-8a0e-f3c635396eff-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.810485 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fda0f5a7-9a36-4090-8a0e-f3c635396eff-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.824236 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttn28\" (UniqueName: \"kubernetes.io/projected/fda0f5a7-9a36-4090-8a0e-f3c635396eff-kube-api-access-ttn28\") pod \"nmstate-console-plugin-7fbb5f6569-cqwwz\" (UID: \"fda0f5a7-9a36-4090-8a0e-f3c635396eff\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.847520 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.903941 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-serving-cert\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.904595 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klbwh\" (UniqueName: \"kubernetes.io/projected/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-kube-api-access-klbwh\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.904625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-service-ca\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.904650 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-config\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.904680 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-oauth-config\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.904706 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-trusted-ca-bundle\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:44 crc kubenswrapper[5024]: I1128 17:15:44.904726 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-oauth-serving-cert\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.007139 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-oauth-config\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.007239 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-trusted-ca-bundle\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.007276 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-oauth-serving-cert\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.007340 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-serving-cert\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.007440 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klbwh\" (UniqueName: \"kubernetes.io/projected/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-kube-api-access-klbwh\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.007473 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-service-ca\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.007508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-config\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.008483 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-config\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.010300 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-oauth-serving-cert\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.010940 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-service-ca\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.011793 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-trusted-ca-bundle\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.016372 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-oauth-config\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.016503 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-serving-cert\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.049898 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klbwh\" (UniqueName: \"kubernetes.io/projected/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-kube-api-access-klbwh\") pod \"console-78fdf7cd4f-99mvs\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.102238 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.108855 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pw8c8\" (UID: \"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.115698 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pw8c8\" (UID: \"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.241266 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.266249 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp"] Nov 28 17:15:45 crc kubenswrapper[5024]: W1128 17:15:45.278312 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod570a7ddb_1a00_4e87_8db0_32760d8455d9.slice/crio-b1cf6d1d2d7dc1de650e6728cf70f3fe296d51810e66671e991e1a3abfb9a688 WatchSource:0}: Error finding container b1cf6d1d2d7dc1de650e6728cf70f3fe296d51810e66671e991e1a3abfb9a688: Status 404 returned error can't find the container with id b1cf6d1d2d7dc1de650e6728cf70f3fe296d51810e66671e991e1a3abfb9a688 Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.340457 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" event={"ID":"570a7ddb-1a00-4e87-8db0-32760d8455d9","Type":"ContainerStarted","Data":"b1cf6d1d2d7dc1de650e6728cf70f3fe296d51810e66671e991e1a3abfb9a688"} Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.345844 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8gxnt" event={"ID":"d996313c-5bc6-4930-a202-ca55774866c0","Type":"ContainerStarted","Data":"d4ec5b3f2cb359dde6d9b7ab0dc75811f43021e971491367cb15fbcf0c7b6c77"} Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.446712 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz"] Nov 28 17:15:45 crc kubenswrapper[5024]: W1128 17:15:45.461299 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda0f5a7_9a36_4090_8a0e_f3c635396eff.slice/crio-a0e9e083a48eb55802ce692e6a4d82c6b6cb6101278a673b04092afd6562a240 WatchSource:0}: Error finding container a0e9e083a48eb55802ce692e6a4d82c6b6cb6101278a673b04092afd6562a240: Status 404 returned error can't find the container with id a0e9e083a48eb55802ce692e6a4d82c6b6cb6101278a673b04092afd6562a240 Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.577191 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78fdf7cd4f-99mvs"] Nov 28 17:15:45 crc kubenswrapper[5024]: I1128 17:15:45.697652 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8"] Nov 28 17:15:45 crc kubenswrapper[5024]: W1128 17:15:45.718615 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd456cf2_7c4f_4ba6_9be7_85d96c86e3a5.slice/crio-6bdd700d70355acdb25f2003d082102ba1bac79a10a180e2a186f81c50794a99 WatchSource:0}: Error finding container 6bdd700d70355acdb25f2003d082102ba1bac79a10a180e2a186f81c50794a99: Status 404 returned error can't find the container with id 6bdd700d70355acdb25f2003d082102ba1bac79a10a180e2a186f81c50794a99 Nov 28 17:15:46 crc kubenswrapper[5024]: I1128 17:15:46.356695 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fdf7cd4f-99mvs" event={"ID":"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b","Type":"ContainerStarted","Data":"f01fb5ee8b5feb089b7dd26a3d34261a5c738f1e580b0e144d42a6555eed1493"} Nov 28 17:15:46 crc kubenswrapper[5024]: I1128 17:15:46.357053 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fdf7cd4f-99mvs" 
event={"ID":"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b","Type":"ContainerStarted","Data":"d01b9e46d5c7f2172f90a3af5b754bca6c47e153c88d180b5c167d880178a0df"} Nov 28 17:15:46 crc kubenswrapper[5024]: I1128 17:15:46.358466 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" event={"ID":"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5","Type":"ContainerStarted","Data":"6bdd700d70355acdb25f2003d082102ba1bac79a10a180e2a186f81c50794a99"} Nov 28 17:15:46 crc kubenswrapper[5024]: I1128 17:15:46.359533 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" event={"ID":"fda0f5a7-9a36-4090-8a0e-f3c635396eff","Type":"ContainerStarted","Data":"a0e9e083a48eb55802ce692e6a4d82c6b6cb6101278a673b04092afd6562a240"} Nov 28 17:15:46 crc kubenswrapper[5024]: I1128 17:15:46.386317 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-78fdf7cd4f-99mvs" podStartSLOduration=2.3862988339999998 podStartE2EDuration="2.386298834s" podCreationTimestamp="2025-11-28 17:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:15:46.377930653 +0000 UTC m=+1048.426851558" watchObservedRunningTime="2025-11-28 17:15:46.386298834 +0000 UTC m=+1048.435219749" Nov 28 17:15:48 crc kubenswrapper[5024]: I1128 17:15:48.398812 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" event={"ID":"570a7ddb-1a00-4e87-8db0-32760d8455d9","Type":"ContainerStarted","Data":"3f02221a2153084d6c4094ea851ec2eebe0b521bb65217db7f29b7d6f13db8e3"} Nov 28 17:15:48 crc kubenswrapper[5024]: I1128 17:15:48.409182 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:15:48 crc kubenswrapper[5024]: I1128 17:15:48.409260 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" event={"ID":"bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5","Type":"ContainerStarted","Data":"460ef182489bdb8f2dcac345debd6c414d01b2a52afc2768ecb3b2d384897df5"} Nov 28 17:15:48 crc kubenswrapper[5024]: I1128 17:15:48.413690 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8gxnt" event={"ID":"d996313c-5bc6-4930-a202-ca55774866c0","Type":"ContainerStarted","Data":"a1d5782627d5b7e1a08c678d6a1777e21a60b9c6a4091513dff5997900d00f1f"} Nov 28 17:15:48 crc kubenswrapper[5024]: I1128 17:15:48.413943 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:48 crc kubenswrapper[5024]: I1128 17:15:48.443570 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" podStartSLOduration=2.592448104 podStartE2EDuration="4.443544875s" podCreationTimestamp="2025-11-28 17:15:44 +0000 UTC" firstStartedPulling="2025-11-28 17:15:45.72483319 +0000 UTC m=+1047.773754095" lastFinishedPulling="2025-11-28 17:15:47.575929961 +0000 UTC m=+1049.624850866" observedRunningTime="2025-11-28 17:15:48.432625241 +0000 UTC m=+1050.481546146" watchObservedRunningTime="2025-11-28 17:15:48.443544875 +0000 UTC m=+1050.492465780" Nov 28 17:15:48 crc kubenswrapper[5024]: I1128 17:15:48.454535 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8gxnt" 
podStartSLOduration=1.723660035 podStartE2EDuration="4.454514082s" podCreationTimestamp="2025-11-28 17:15:44 +0000 UTC" firstStartedPulling="2025-11-28 17:15:44.813143604 +0000 UTC m=+1046.862064509" lastFinishedPulling="2025-11-28 17:15:47.543997651 +0000 UTC m=+1049.592918556" observedRunningTime="2025-11-28 17:15:48.446169221 +0000 UTC m=+1050.495090126" watchObservedRunningTime="2025-11-28 17:15:48.454514082 +0000 UTC m=+1050.503434987" Nov 28 17:15:49 crc kubenswrapper[5024]: I1128 17:15:49.424208 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" event={"ID":"fda0f5a7-9a36-4090-8a0e-f3c635396eff","Type":"ContainerStarted","Data":"dfab620f26f0f3beed7dbe5b741ff64ce38fc000d679df262a0b8db496341762"} Nov 28 17:15:49 crc kubenswrapper[5024]: I1128 17:15:49.444810 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-cqwwz" podStartSLOduration=1.9482991809999999 podStartE2EDuration="5.444793233s" podCreationTimestamp="2025-11-28 17:15:44 +0000 UTC" firstStartedPulling="2025-11-28 17:15:45.46321448 +0000 UTC m=+1047.512135385" lastFinishedPulling="2025-11-28 17:15:48.959708532 +0000 UTC m=+1051.008629437" observedRunningTime="2025-11-28 17:15:49.440340894 +0000 UTC m=+1051.489261799" watchObservedRunningTime="2025-11-28 17:15:49.444793233 +0000 UTC m=+1051.493714138" Nov 28 17:15:51 crc kubenswrapper[5024]: I1128 17:15:51.488309 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" event={"ID":"570a7ddb-1a00-4e87-8db0-32760d8455d9","Type":"ContainerStarted","Data":"ea9551b1f1d6cf1dcbc0bd2cc2430ff22661e532edb06b3b4812d35e2f171be1"} Nov 28 17:15:51 crc kubenswrapper[5024]: I1128 17:15:51.510230 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-spqhp" podStartSLOduration=2.121206404 podStartE2EDuration="7.510200539s" podCreationTimestamp="2025-11-28 17:15:44 +0000 UTC" firstStartedPulling="2025-11-28 17:15:45.280454763 +0000 UTC m=+1047.329375668" lastFinishedPulling="2025-11-28 17:15:50.669448888 +0000 UTC m=+1052.718369803" observedRunningTime="2025-11-28 17:15:51.506002108 +0000 UTC m=+1053.554923023" watchObservedRunningTime="2025-11-28 17:15:51.510200539 +0000 UTC m=+1053.559121484" Nov 28 17:15:54 crc kubenswrapper[5024]: I1128 17:15:54.780556 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8gxnt" Nov 28 17:15:55 crc kubenswrapper[5024]: I1128 17:15:55.103538 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:55 crc kubenswrapper[5024]: I1128 17:15:55.103591 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:55 crc kubenswrapper[5024]: I1128 17:15:55.107558 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:55 crc kubenswrapper[5024]: I1128 17:15:55.517614 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:15:55 crc kubenswrapper[5024]: I1128 17:15:55.589166 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-684b966679-864s4"] Nov 28 17:16:05 crc kubenswrapper[5024]: I1128 17:16:05.247642 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pw8c8" Nov 28 17:16:20 crc kubenswrapper[5024]: I1128 17:16:20.641546 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-684b966679-864s4" podUID="b95c512b-6c39-4c81-b89f-c76cfd89a185" containerName="console" containerID="cri-o://edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4" gracePeriod=15 Nov 28 17:16:20 crc kubenswrapper[5024]: I1128 17:16:20.849679 5024 patch_prober.go:28] interesting pod/console-684b966679-864s4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/health\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Nov 28 17:16:20 crc kubenswrapper[5024]: I1128 17:16:20.850079 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-684b966679-864s4" podUID="b95c512b-6c39-4c81-b89f-c76cfd89a185" containerName="console" probeResult="failure" output="Get \"https://10.217.0.80:8443/health\": dial tcp 10.217.0.80:8443: connect: connection refused" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.131351 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-684b966679-864s4_b95c512b-6c39-4c81-b89f-c76cfd89a185/console/0.log" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.131416 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-684b966679-864s4" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.209567 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-trusted-ca-bundle\") pod \"b95c512b-6c39-4c81-b89f-c76cfd89a185\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.209622 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-oauth-serving-cert\") pod \"b95c512b-6c39-4c81-b89f-c76cfd89a185\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.209667 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-serving-cert\") pod \"b95c512b-6c39-4c81-b89f-c76cfd89a185\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.209760 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-oauth-config\") pod \"b95c512b-6c39-4c81-b89f-c76cfd89a185\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.209810 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-service-ca\") pod \"b95c512b-6c39-4c81-b89f-c76cfd89a185\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.209923 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x8rsk\" (UniqueName: \"kubernetes.io/projected/b95c512b-6c39-4c81-b89f-c76cfd89a185-kube-api-access-x8rsk\") pod \"b95c512b-6c39-4c81-b89f-c76cfd89a185\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.209969 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-config\") pod \"b95c512b-6c39-4c81-b89f-c76cfd89a185\" (UID: \"b95c512b-6c39-4c81-b89f-c76cfd89a185\") " Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.211076 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b95c512b-6c39-4c81-b89f-c76cfd89a185" (UID: "b95c512b-6c39-4c81-b89f-c76cfd89a185"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.211090 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-service-ca" (OuterVolumeSpecName: "service-ca") pod "b95c512b-6c39-4c81-b89f-c76cfd89a185" (UID: "b95c512b-6c39-4c81-b89f-c76cfd89a185"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.211282 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-config" (OuterVolumeSpecName: "console-config") pod "b95c512b-6c39-4c81-b89f-c76cfd89a185" (UID: "b95c512b-6c39-4c81-b89f-c76cfd89a185"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.211535 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b95c512b-6c39-4c81-b89f-c76cfd89a185" (UID: "b95c512b-6c39-4c81-b89f-c76cfd89a185"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.217565 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b95c512b-6c39-4c81-b89f-c76cfd89a185" (UID: "b95c512b-6c39-4c81-b89f-c76cfd89a185"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.217944 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95c512b-6c39-4c81-b89f-c76cfd89a185-kube-api-access-x8rsk" (OuterVolumeSpecName: "kube-api-access-x8rsk") pod "b95c512b-6c39-4c81-b89f-c76cfd89a185" (UID: "b95c512b-6c39-4c81-b89f-c76cfd89a185"). InnerVolumeSpecName "kube-api-access-x8rsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.218206 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b95c512b-6c39-4c81-b89f-c76cfd89a185" (UID: "b95c512b-6c39-4c81-b89f-c76cfd89a185"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.311956 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.312004 5024 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.312028 5024 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.312041 5024 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.312053 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.312064 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8rsk\" (UniqueName: \"kubernetes.io/projected/b95c512b-6c39-4c81-b89f-c76cfd89a185-kube-api-access-x8rsk\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.312076 5024 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b95c512b-6c39-4c81-b89f-c76cfd89a185-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.736572 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-684b966679-864s4_b95c512b-6c39-4c81-b89f-c76cfd89a185/console/0.log" Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.737075 5024 generic.go:334] "Generic (PLEG): container finished" podID="b95c512b-6c39-4c81-b89f-c76cfd89a185" containerID="edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4" exitCode=2 Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.737123 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684b966679-864s4" event={"ID":"b95c512b-6c39-4c81-b89f-c76cfd89a185","Type":"ContainerDied","Data":"edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4"} Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.737158 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684b966679-864s4" event={"ID":"b95c512b-6c39-4c81-b89f-c76cfd89a185","Type":"ContainerDied","Data":"9a0a5685d44563799666812ec21596c18f5de3e131987b32aaf09ecd08e632d3"} Nov 28 17:16:21 crc 
Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.737155 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-684b966679-864s4"
Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.737174 5024 scope.go:117] "RemoveContainer" containerID="edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4"
Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.767740 5024 scope.go:117] "RemoveContainer" containerID="edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4"
Nov 28 17:16:21 crc kubenswrapper[5024]: E1128 17:16:21.771346 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4\": container with ID starting with edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4 not found: ID does not exist" containerID="edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4"
Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.773979 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4"} err="failed to get container status \"edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4\": rpc error: code = NotFound desc = could not find container \"edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4\": container with ID starting with edaf95e01854863f1cfaed6ba7c1d08edec9bec805c4b7501e8663e0c68337c4 not found: ID does not exist"
Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.782949 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-684b966679-864s4"]
Nov 28 17:16:21 crc kubenswrapper[5024]: I1128 17:16:21.787407 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-684b966679-864s4"]
Nov 28 17:16:22 crc kubenswrapper[5024]: I1128 17:16:22.510101 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95c512b-6c39-4c81-b89f-c76cfd89a185" path="/var/lib/kubelet/pods/b95c512b-6c39-4c81-b89f-c76cfd89a185/volumes"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.235837 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"]
Nov 28 17:16:23 crc kubenswrapper[5024]: E1128 17:16:23.237523 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95c512b-6c39-4c81-b89f-c76cfd89a185" containerName="console"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.237625 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95c512b-6c39-4c81-b89f-c76cfd89a185" containerName="console"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.237909 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95c512b-6c39-4c81-b89f-c76cfd89a185" containerName="console"
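The NotFound pair above (ContainerStatus fails, then "DeleteContainer returned error") is benign: the container had already been removed by the time kubelet re-queried it, and kubelet treats the deletion as complete. A sketch of that idempotent-cleanup pattern (the error text mirrors the log; a real CRI client would inspect the gRPC status code rather than the message string, and the container id below is a placeholder):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // removeContainer stands in for a CRI RemoveContainer call that races
    // with another deletion and reports NotFound.
    func removeContainer(id string) error {
    	return fmt.Errorf("rpc error: code = NotFound desc = could not find container %q", id)
    }

    // isNotFound is a crude string check; real clients use the gRPC status code.
    func isNotFound(err error) bool {
    	return err != nil && strings.Contains(err.Error(), "code = NotFound")
    }

    func cleanup(id string) error {
    	if err := removeContainer(id); err != nil && !isNotFound(err) {
    		return err
    	}
    	return nil // NotFound means already gone: deletion is idempotent
    }

    func main() {
    	fmt.Println(cleanup("example-container-id")) // <nil>
    }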
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.240584 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.267505 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.269301 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"]
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.363853 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.363976 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9xlv\" (UniqueName: \"kubernetes.io/projected/0814b582-694e-41f0-bcd0-04311a2471d2-kube-api-access-l9xlv\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.364008 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.466155 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.466268 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9xlv\" (UniqueName: \"kubernetes.io/projected/0814b582-694e-41f0-bcd0-04311a2471d2-kube-api-access-l9xlv\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.466295 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.466651 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.466727 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.488256 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9xlv\" (UniqueName: \"kubernetes.io/projected/0814b582-694e-41f0-bcd0-04311a2471d2-kube-api-access-l9xlv\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:23 crc kubenswrapper[5024]: I1128 17:16:23.591316 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"
Nov 28 17:16:24 crc kubenswrapper[5024]: I1128 17:16:24.003996 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj"]
Nov 28 17:16:24 crc kubenswrapper[5024]: I1128 17:16:24.760164 5024 generic.go:334] "Generic (PLEG): container finished" podID="0814b582-694e-41f0-bcd0-04311a2471d2" containerID="da55e6d563e785e92c25509267ccc2718ae1cca8a3f49849238a6edd540500e8" exitCode=0
Nov 28 17:16:24 crc kubenswrapper[5024]: I1128 17:16:24.760230 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj" event={"ID":"0814b582-694e-41f0-bcd0-04311a2471d2","Type":"ContainerDied","Data":"da55e6d563e785e92c25509267ccc2718ae1cca8a3f49849238a6edd540500e8"}
Nov 28 17:16:24 crc kubenswrapper[5024]: I1128 17:16:24.761494 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj" event={"ID":"0814b582-694e-41f0-bcd0-04311a2471d2","Type":"ContainerStarted","Data":"89afedcbfd30d3c8f26939d26defc369be09603727e6ab5e043c01265818cc0f"}
Nov 28 17:16:24 crc kubenswrapper[5024]: I1128 17:16:24.762217 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 17:16:26 crc kubenswrapper[5024]: I1128 17:16:26.782422 5024 generic.go:334] "Generic (PLEG): container finished" podID="0814b582-694e-41f0-bcd0-04311a2471d2" containerID="a8495da2bef1202acf0387fb284b11b0ddf40296dbda48efdd3211872eacb7de" exitCode=0
Nov 28 17:16:26 crc kubenswrapper[5024]: I1128 17:16:26.782512 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj" event={"ID":"0814b582-694e-41f0-bcd0-04311a2471d2","Type":"ContainerDied","Data":"a8495da2bef1202acf0387fb284b11b0ddf40296dbda48efdd3211872eacb7de"}
podID="0814b582-694e-41f0-bcd0-04311a2471d2" containerID="5d0c4f7b2939dc1c68678c41f6862309b55f2bd89320567a26bce95b66390b1f" exitCode=0 Nov 28 17:16:27 crc kubenswrapper[5024]: I1128 17:16:27.790670 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj" event={"ID":"0814b582-694e-41f0-bcd0-04311a2471d2","Type":"ContainerDied","Data":"5d0c4f7b2939dc1c68678c41f6862309b55f2bd89320567a26bce95b66390b1f"} Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.061940 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.166888 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-bundle\") pod \"0814b582-694e-41f0-bcd0-04311a2471d2\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.166976 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9xlv\" (UniqueName: \"kubernetes.io/projected/0814b582-694e-41f0-bcd0-04311a2471d2-kube-api-access-l9xlv\") pod \"0814b582-694e-41f0-bcd0-04311a2471d2\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.167372 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-util\") pod \"0814b582-694e-41f0-bcd0-04311a2471d2\" (UID: \"0814b582-694e-41f0-bcd0-04311a2471d2\") " Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.168592 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-bundle" (OuterVolumeSpecName: "bundle") pod "0814b582-694e-41f0-bcd0-04311a2471d2" (UID: "0814b582-694e-41f0-bcd0-04311a2471d2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.178580 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0814b582-694e-41f0-bcd0-04311a2471d2-kube-api-access-l9xlv" (OuterVolumeSpecName: "kube-api-access-l9xlv") pod "0814b582-694e-41f0-bcd0-04311a2471d2" (UID: "0814b582-694e-41f0-bcd0-04311a2471d2"). InnerVolumeSpecName "kube-api-access-l9xlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.190605 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-util" (OuterVolumeSpecName: "util") pod "0814b582-694e-41f0-bcd0-04311a2471d2" (UID: "0814b582-694e-41f0-bcd0-04311a2471d2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.269793 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.269831 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9xlv\" (UniqueName: \"kubernetes.io/projected/0814b582-694e-41f0-bcd0-04311a2471d2-kube-api-access-l9xlv\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.269843 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0814b582-694e-41f0-bcd0-04311a2471d2-util\") on node \"crc\" DevicePath \"\"" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.806693 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj" event={"ID":"0814b582-694e-41f0-bcd0-04311a2471d2","Type":"ContainerDied","Data":"89afedcbfd30d3c8f26939d26defc369be09603727e6ab5e043c01265818cc0f"} Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.806739 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89afedcbfd30d3c8f26939d26defc369be09603727e6ab5e043c01265818cc0f" Nov 28 17:16:29 crc kubenswrapper[5024]: I1128 17:16:29.806749 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj" Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.072417 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"] Nov 28 17:16:39 crc kubenswrapper[5024]: E1128 17:16:39.073320 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0814b582-694e-41f0-bcd0-04311a2471d2" containerName="pull" Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.073333 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0814b582-694e-41f0-bcd0-04311a2471d2" containerName="pull" Nov 28 17:16:39 crc kubenswrapper[5024]: E1128 17:16:39.073380 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0814b582-694e-41f0-bcd0-04311a2471d2" containerName="extract" Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.073386 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0814b582-694e-41f0-bcd0-04311a2471d2" containerName="extract" Nov 28 17:16:39 crc kubenswrapper[5024]: E1128 17:16:39.073396 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0814b582-694e-41f0-bcd0-04311a2471d2" containerName="util" Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.073402 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0814b582-694e-41f0-bcd0-04311a2471d2" containerName="util" Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.073596 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0814b582-694e-41f0-bcd0-04311a2471d2" containerName="extract" Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.074182 5024 util.go:30] "No sandbox for pod can be found. 
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.074182 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.080585 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bzvts"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.080683 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.080814 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.080922 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.081064 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.126614 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"]
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.159584 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94c85c6d-1f63-4a43-96a5-850aae6a27cf-apiservice-cert\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.159891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94c85c6d-1f63-4a43-96a5-850aae6a27cf-webhook-cert\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.160240 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z76dv\" (UniqueName: \"kubernetes.io/projected/94c85c6d-1f63-4a43-96a5-850aae6a27cf-kube-api-access-z76dv\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.261567 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94c85c6d-1f63-4a43-96a5-850aae6a27cf-apiservice-cert\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.261674 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94c85c6d-1f63-4a43-96a5-850aae6a27cf-webhook-cert\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.261775 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z76dv\" (UniqueName: \"kubernetes.io/projected/94c85c6d-1f63-4a43-96a5-850aae6a27cf-kube-api-access-z76dv\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.268954 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94c85c6d-1f63-4a43-96a5-850aae6a27cf-apiservice-cert\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.285095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94c85c6d-1f63-4a43-96a5-850aae6a27cf-webhook-cert\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.298923 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z76dv\" (UniqueName: \"kubernetes.io/projected/94c85c6d-1f63-4a43-96a5-850aae6a27cf-kube-api-access-z76dv\") pod \"metallb-operator-controller-manager-78fcc557d5-tzzx8\" (UID: \"94c85c6d-1f63-4a43-96a5-850aae6a27cf\") " pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.342439 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"]
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.344158 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.345812 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.345985 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vjgff"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.346301 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.355060 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"]
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.444659 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.468852 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2675bece-a200-49ea-a9b0-5e394ae7167d-webhook-cert\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.468903 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq8tt\" (UniqueName: \"kubernetes.io/projected/2675bece-a200-49ea-a9b0-5e394ae7167d-kube-api-access-mq8tt\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.469628 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2675bece-a200-49ea-a9b0-5e394ae7167d-apiservice-cert\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.571196 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2675bece-a200-49ea-a9b0-5e394ae7167d-apiservice-cert\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.571273 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2675bece-a200-49ea-a9b0-5e394ae7167d-webhook-cert\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.571300 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq8tt\" (UniqueName: \"kubernetes.io/projected/2675bece-a200-49ea-a9b0-5e394ae7167d-kube-api-access-mq8tt\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.578047 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2675bece-a200-49ea-a9b0-5e394ae7167d-apiservice-cert\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.578134 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2675bece-a200-49ea-a9b0-5e394ae7167d-webhook-cert\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.603433 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq8tt\" (UniqueName: \"kubernetes.io/projected/2675bece-a200-49ea-a9b0-5e394ae7167d-kube-api-access-mq8tt\") pod \"metallb-operator-webhook-server-6786944b4d-h88pn\" (UID: \"2675bece-a200-49ea-a9b0-5e394ae7167d\") " pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:39 crc kubenswrapper[5024]: I1128 17:16:39.680709 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:40 crc kubenswrapper[5024]: I1128 17:16:40.039744 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"]
Nov 28 17:16:40 crc kubenswrapper[5024]: W1128 17:16:40.044910 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c85c6d_1f63_4a43_96a5_850aae6a27cf.slice/crio-568efd6d907df584f32f1e21a5bacb588cbeee6ed8fd9cfe0aaa83ffb9c6f302 WatchSource:0}: Error finding container 568efd6d907df584f32f1e21a5bacb588cbeee6ed8fd9cfe0aaa83ffb9c6f302: Status 404 returned error can't find the container with id 568efd6d907df584f32f1e21a5bacb588cbeee6ed8fd9cfe0aaa83ffb9c6f302
Nov 28 17:16:40 crc kubenswrapper[5024]: I1128 17:16:40.181885 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"]
Nov 28 17:16:40 crc kubenswrapper[5024]: W1128 17:16:40.184041 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2675bece_a200_49ea_a9b0_5e394ae7167d.slice/crio-6a2ae0736c8ff2504bf6c510184f5fc15013944dcf61c795f5229d7e0bfbe9af WatchSource:0}: Error finding container 6a2ae0736c8ff2504bf6c510184f5fc15013944dcf61c795f5229d7e0bfbe9af: Status 404 returned error can't find the container with id 6a2ae0736c8ff2504bf6c510184f5fc15013944dcf61c795f5229d7e0bfbe9af
Nov 28 17:16:40 crc kubenswrapper[5024]: I1128 17:16:40.886526 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8" event={"ID":"94c85c6d-1f63-4a43-96a5-850aae6a27cf","Type":"ContainerStarted","Data":"568efd6d907df584f32f1e21a5bacb588cbeee6ed8fd9cfe0aaa83ffb9c6f302"}
Nov 28 17:16:40 crc kubenswrapper[5024]: I1128 17:16:40.888778 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn" event={"ID":"2675bece-a200-49ea-a9b0-5e394ae7167d","Type":"ContainerStarted","Data":"6a2ae0736c8ff2504bf6c510184f5fc15013944dcf61c795f5229d7e0bfbe9af"}
Nov 28 17:16:45 crc kubenswrapper[5024]: I1128 17:16:45.947594 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8" event={"ID":"94c85c6d-1f63-4a43-96a5-850aae6a27cf","Type":"ContainerStarted","Data":"10472599bfb2c572291a94a632fca963ba7253c59bfa8ba652c2ad7805525dd1"}
Nov 28 17:16:45 crc kubenswrapper[5024]: I1128 17:16:45.948240 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:16:45 crc kubenswrapper[5024]: I1128 17:16:45.953788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn" event={"ID":"2675bece-a200-49ea-a9b0-5e394ae7167d","Type":"ContainerStarted","Data":"d77bdf0df7440ef2a107788a52c17b6deca26283c9b09a560bff4b2e6c594adb"}
Nov 28 17:16:45 crc kubenswrapper[5024]: I1128 17:16:45.954805 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:16:45 crc kubenswrapper[5024]: I1128 17:16:45.974820 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8" podStartSLOduration=1.324900767 podStartE2EDuration="6.974796991s" podCreationTimestamp="2025-11-28 17:16:39 +0000 UTC" firstStartedPulling="2025-11-28 17:16:40.047545154 +0000 UTC m=+1102.096466059" lastFinishedPulling="2025-11-28 17:16:45.697441378 +0000 UTC m=+1107.746362283" observedRunningTime="2025-11-28 17:16:45.971768864 +0000 UTC m=+1108.020689769" watchObservedRunningTime="2025-11-28 17:16:45.974796991 +0000 UTC m=+1108.023717896"
Nov 28 17:16:46 crc kubenswrapper[5024]: I1128 17:16:46.016531 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn" podStartSLOduration=1.4884370009999999 podStartE2EDuration="7.016507894s" podCreationTimestamp="2025-11-28 17:16:39 +0000 UTC" firstStartedPulling="2025-11-28 17:16:40.187068625 +0000 UTC m=+1102.235989530" lastFinishedPulling="2025-11-28 17:16:45.715139518 +0000 UTC m=+1107.764060423" observedRunningTime="2025-11-28 17:16:46.009893133 +0000 UTC m=+1108.058814058" watchObservedRunningTime="2025-11-28 17:16:46.016507894 +0000 UTC m=+1108.065428799"
Nov 28 17:16:59 crc kubenswrapper[5024]: I1128 17:16:59.686596 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn"
Nov 28 17:17:07 crc kubenswrapper[5024]: I1128 17:17:07.564869 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:17:07 crc kubenswrapper[5024]: I1128 17:17:07.565517 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:17:19 crc kubenswrapper[5024]: I1128 17:17:19.447032 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-78fcc557d5-tzzx8"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.145813 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fbhkx"]
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.150331 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.153445 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.153775 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.153799 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"]
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.155714 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.157365 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ztd47"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.157717 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.173678 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"]
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.243553 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtbv\" (UniqueName: \"kubernetes.io/projected/63ee2602-779a-4f8d-89e8-e741417fcba9-kube-api-access-njtbv\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.243626 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-sockets\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.243742 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-conf\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.243908 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics-certs\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.244000 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-reloader\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.244233 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-startup\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.244512 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn6kq\" (UniqueName: \"kubernetes.io/projected/0b9fbfa7-b944-4a28-b32e-011324bf44b7-kube-api-access-mn6kq\") pod \"frr-k8s-webhook-server-7fcb986d4-8v44d\" (UID: \"0b9fbfa7-b944-4a28-b32e-011324bf44b7\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.244696 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b9fbfa7-b944-4a28-b32e-011324bf44b7-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-8v44d\" (UID: \"0b9fbfa7-b944-4a28-b32e-011324bf44b7\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.244871 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.249383 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kwp5s"]
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.250931 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.255442 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.255466 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-52rwj"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.255464 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.255566 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.282724 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-gh2lw"]
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.284373 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.286311 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.298104 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-gh2lw"]
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.346910 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njtbv\" (UniqueName: \"kubernetes.io/projected/63ee2602-779a-4f8d-89e8-e741417fcba9-kube-api-access-njtbv\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.346977 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-sockets\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347004 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-conf\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347075 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347102 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics-certs\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347138 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-reloader\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347172 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-startup\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn6kq\" (UniqueName: \"kubernetes.io/projected/0b9fbfa7-b944-4a28-b32e-011324bf44b7-kube-api-access-mn6kq\") pod \"frr-k8s-webhook-server-7fcb986d4-8v44d\" (UID: \"0b9fbfa7-b944-4a28-b32e-011324bf44b7\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347251 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwjr\" (UniqueName: \"kubernetes.io/projected/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-kube-api-access-pqwjr\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347287 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b9fbfa7-b944-4a28-b32e-011324bf44b7-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-8v44d\" (UID: \"0b9fbfa7-b944-4a28-b32e-011324bf44b7\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347312 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metallb-excludel2\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347346 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metrics-certs\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.347393 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: E1128 17:17:20.347832 5024 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics-certs") pod "frr-k8s-fbhkx" (UID: "63ee2602-779a-4f8d-89e8-e741417fcba9") : secret "frr-k8s-certs-secret" not found
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.348668 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-startup\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.348905 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.349105 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-sockets\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.349206 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-frr-conf\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.349395 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/63ee2602-779a-4f8d-89e8-e741417fcba9-reloader\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.356705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b9fbfa7-b944-4a28-b32e-011324bf44b7-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-8v44d\" (UID: \"0b9fbfa7-b944-4a28-b32e-011324bf44b7\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.366795 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njtbv\" (UniqueName: \"kubernetes.io/projected/63ee2602-779a-4f8d-89e8-e741417fcba9-kube-api-access-njtbv\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.369415 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn6kq\" (UniqueName: \"kubernetes.io/projected/0b9fbfa7-b944-4a28-b32e-011324bf44b7-kube-api-access-mn6kq\") pod \"frr-k8s-webhook-server-7fcb986d4-8v44d\" (UID: \"0b9fbfa7-b944-4a28-b32e-011324bf44b7\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.449289 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwjr\" (UniqueName: \"kubernetes.io/projected/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-kube-api-access-pqwjr\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.449367 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metallb-excludel2\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.449410 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-metrics-certs\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.449434 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metrics-certs\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.449504 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-cert\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.449533 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4s6\" (UniqueName: \"kubernetes.io/projected/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-kube-api-access-6b4s6\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.449649 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: E1128 17:17:20.449854 5024 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 28 17:17:20 crc kubenswrapper[5024]: E1128 17:17:20.449926 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist podName:a57fb6ba-d2d8-4e51-8960-a1a15e92c950 nodeName:}" failed. No retries permitted until 2025-11-28 17:17:20.949895702 +0000 UTC m=+1142.998816607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist") pod "speaker-kwp5s" (UID: "a57fb6ba-d2d8-4e51-8960-a1a15e92c950") : secret "metallb-memberlist" not found
Nov 28 17:17:20 crc kubenswrapper[5024]: E1128 17:17:20.450856 5024 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Nov 28 17:17:20 crc kubenswrapper[5024]: E1128 17:17:20.450990 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metrics-certs podName:a57fb6ba-d2d8-4e51-8960-a1a15e92c950 nodeName:}" failed. No retries permitted until 2025-11-28 17:17:20.950970183 +0000 UTC m=+1142.999891088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metrics-certs") pod "speaker-kwp5s" (UID: "a57fb6ba-d2d8-4e51-8960-a1a15e92c950") : secret "speaker-certs-secret" not found
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.451111 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metallb-excludel2\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.470088 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwjr\" (UniqueName: \"kubernetes.io/projected/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-kube-api-access-pqwjr\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.487487 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.551291 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-metrics-certs\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.551368 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-cert\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.551394 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b4s6\" (UniqueName: \"kubernetes.io/projected/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-kube-api-access-6b4s6\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.553747 5024 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.556071 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-metrics-certs\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.567999 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-cert\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.568713 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b4s6\" (UniqueName: \"kubernetes.io/projected/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-kube-api-access-6b4s6\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw"
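The "secret not found" failures above are transient: the volume reconciler keeps retrying MountVolume.SetUp until the MetalLB controller publishes the Secret (the memberlist mount does succeed at 17:17:21.985282 below). For orientation, a minimal sketch of the equivalent out-of-band check with client-go; the namespace and secret name are taken from the log, while the in-cluster config, one-minute timeout, and 500ms cadence are assumptions, not kubelet internals:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	for {
    		// Same lookup the mount is blocked on: metallb-system/metallb-memberlist.
    		_, err := cs.CoreV1().Secrets("metallb-system").Get(ctx, "metallb-memberlist", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("secret present; the mount can proceed")
    			return
    		}
    		if !apierrors.IsNotFound(err) {
    			panic(err) // unexpected API error, not the "not found" case from the log
    		}
    		time.Sleep(500 * time.Millisecond) // retry cadence mirroring the log
    	}
    }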
\"kubernetes.io/projected/7a5e4911-39d4-47fb-84f6-b7382b5d3d0c-kube-api-access-6b4s6\") pod \"controller-f8648f98b-gh2lw\" (UID: \"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c\") " pod="metallb-system/controller-f8648f98b-gh2lw" Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.600918 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-gh2lw" Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.857571 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics-certs\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx" Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.861701 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ee2602-779a-4f8d-89e8-e741417fcba9-metrics-certs\") pod \"frr-k8s-fbhkx\" (UID: \"63ee2602-779a-4f8d-89e8-e741417fcba9\") " pod="metallb-system/frr-k8s-fbhkx" Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.916431 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"] Nov 28 17:17:20 crc kubenswrapper[5024]: W1128 17:17:20.920249 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b9fbfa7_b944_4a28_b32e_011324bf44b7.slice/crio-61da36e73457c4def39aa7b6e7a54d2ba131e9f1722d27d320e94c494a5d7d0a WatchSource:0}: Error finding container 61da36e73457c4def39aa7b6e7a54d2ba131e9f1722d27d320e94c494a5d7d0a: Status 404 returned error can't find the container with id 61da36e73457c4def39aa7b6e7a54d2ba131e9f1722d27d320e94c494a5d7d0a Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.959049 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metrics-certs\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s" Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.959197 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s" Nov 28 17:17:20 crc kubenswrapper[5024]: E1128 17:17:20.959439 5024 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 17:17:20 crc kubenswrapper[5024]: E1128 17:17:20.959531 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist podName:a57fb6ba-d2d8-4e51-8960-a1a15e92c950 nodeName:}" failed. No retries permitted until 2025-11-28 17:17:21.959513699 +0000 UTC m=+1144.008434604 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist") pod "speaker-kwp5s" (UID: "a57fb6ba-d2d8-4e51-8960-a1a15e92c950") : secret "metallb-memberlist" not found Nov 28 17:17:20 crc kubenswrapper[5024]: I1128 17:17:20.963493 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-metrics-certs\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s" Nov 28 17:17:21 crc kubenswrapper[5024]: W1128 17:17:21.052385 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a5e4911_39d4_47fb_84f6_b7382b5d3d0c.slice/crio-71ada615b07267e277eaf0464f5cd90d92a1f53dbcd14ae0a653c3226d5ceb70 WatchSource:0}: Error finding container 71ada615b07267e277eaf0464f5cd90d92a1f53dbcd14ae0a653c3226d5ceb70: Status 404 returned error can't find the container with id 71ada615b07267e277eaf0464f5cd90d92a1f53dbcd14ae0a653c3226d5ceb70 Nov 28 17:17:21 crc kubenswrapper[5024]: I1128 17:17:21.054006 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-gh2lw"] Nov 28 17:17:21 crc kubenswrapper[5024]: I1128 17:17:21.076719 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fbhkx" Nov 28 17:17:21 crc kubenswrapper[5024]: I1128 17:17:21.222744 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d" event={"ID":"0b9fbfa7-b944-4a28-b32e-011324bf44b7","Type":"ContainerStarted","Data":"61da36e73457c4def39aa7b6e7a54d2ba131e9f1722d27d320e94c494a5d7d0a"} Nov 28 17:17:21 crc kubenswrapper[5024]: I1128 17:17:21.224426 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-gh2lw" event={"ID":"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c","Type":"ContainerStarted","Data":"71ada615b07267e277eaf0464f5cd90d92a1f53dbcd14ae0a653c3226d5ceb70"} Nov 28 17:17:21 crc kubenswrapper[5024]: I1128 17:17:21.979935 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s" Nov 28 17:17:21 crc kubenswrapper[5024]: I1128 17:17:21.985282 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a57fb6ba-d2d8-4e51-8960-a1a15e92c950-memberlist\") pod \"speaker-kwp5s\" (UID: \"a57fb6ba-d2d8-4e51-8960-a1a15e92c950\") " pod="metallb-system/speaker-kwp5s" Nov 28 17:17:22 crc kubenswrapper[5024]: I1128 17:17:22.075895 5024 util.go:30] "No sandbox for pod can be found. 
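Note the durationBeforeRetry doubling between the two memberlist failures (500ms at 17:17:20.449926, then 1s at 17:17:20.959531): nestedpendingoperations backs off exponentially per volume operation. A minimal sketch of that progression, assuming kubelet-style doubling; the 2m2s cap matches kubelet's default as I understand it, but treat it as an assumption:

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextRetry doubles the previous delay and clamps it to a cap, mirroring
    // the durationBeforeRetry values visible in the log (500ms -> 1s -> ...).
    func nextRetry(prev, limit time.Duration) time.Duration {
    	if prev <= 0 {
    		return 500 * time.Millisecond // initial delay seen in the log
    	}
    	next := 2 * prev
    	if next > limit {
    		return limit
    	}
    	return next
    }

    func main() {
    	var d time.Duration
    	for i := 0; i < 10; i++ {
    		d = nextRetry(d, 2*time.Minute+2*time.Second) // assumed cap
    		fmt.Println(d)                                // 500ms, 1s, 2s, 4s, ... up to 2m2s
    	}
    }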
Nov 28 17:17:22 crc kubenswrapper[5024]: W1128 17:17:22.101832 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda57fb6ba_d2d8_4e51_8960_a1a15e92c950.slice/crio-817c07601c856162a4e38875d0db680886fd051fedf555ffa515cecbf8523c7a WatchSource:0}: Error finding container 817c07601c856162a4e38875d0db680886fd051fedf555ffa515cecbf8523c7a: Status 404 returned error can't find the container with id 817c07601c856162a4e38875d0db680886fd051fedf555ffa515cecbf8523c7a
Nov 28 17:17:22 crc kubenswrapper[5024]: I1128 17:17:22.232980 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"38ced274fcd88be59b7b5ef60b17dee990554b010edfa56869cf2584d0c36ae1"}
Nov 28 17:17:22 crc kubenswrapper[5024]: I1128 17:17:22.234227 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kwp5s" event={"ID":"a57fb6ba-d2d8-4e51-8960-a1a15e92c950","Type":"ContainerStarted","Data":"817c07601c856162a4e38875d0db680886fd051fedf555ffa515cecbf8523c7a"}
Nov 28 17:17:22 crc kubenswrapper[5024]: I1128 17:17:22.236573 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-gh2lw" event={"ID":"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c","Type":"ContainerStarted","Data":"b9caf240047bfe4d46ec1460e49597dff65384ab60b791e5ecd64c59ac8f162a"}
Nov 28 17:17:22 crc kubenswrapper[5024]: I1128 17:17:22.236605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-gh2lw" event={"ID":"7a5e4911-39d4-47fb-84f6-b7382b5d3d0c","Type":"ContainerStarted","Data":"cede055869adcda237aa4ae8fcd7a26e4498b160d0487b79d9b0cd3de8713223"}
Nov 28 17:17:22 crc kubenswrapper[5024]: I1128 17:17:22.236846 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:22 crc kubenswrapper[5024]: I1128 17:17:22.257935 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-gh2lw" podStartSLOduration=2.25790892 podStartE2EDuration="2.25790892s" podCreationTimestamp="2025-11-28 17:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:17:22.251983209 +0000 UTC m=+1144.300904114" watchObservedRunningTime="2025-11-28 17:17:22.25790892 +0000 UTC m=+1144.306829835"
Nov 28 17:17:23 crc kubenswrapper[5024]: I1128 17:17:23.249099 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kwp5s" event={"ID":"a57fb6ba-d2d8-4e51-8960-a1a15e92c950","Type":"ContainerStarted","Data":"c7f5a5f7753d7af462081de0f658386abb2c8b9729461a0b959433c362374ec3"}
Nov 28 17:17:23 crc kubenswrapper[5024]: I1128 17:17:23.249427 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kwp5s" event={"ID":"a57fb6ba-d2d8-4e51-8960-a1a15e92c950","Type":"ContainerStarted","Data":"4cfdf9154eda180a5fbc642829bd5f7e9903aaf3a2df8ed0dea48d079385a41b"}
Nov 28 17:17:23 crc kubenswrapper[5024]: I1128 17:17:23.249447 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:23 crc kubenswrapper[5024]: I1128 17:17:23.275710 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kwp5s" podStartSLOduration=3.275689943 podStartE2EDuration="3.275689943s" podCreationTimestamp="2025-11-28 17:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:17:23.271180833 +0000 UTC m=+1145.320101748" watchObservedRunningTime="2025-11-28 17:17:23.275689943 +0000 UTC m=+1145.324610848"
Nov 28 17:17:29 crc kubenswrapper[5024]: I1128 17:17:29.303043 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d" event={"ID":"0b9fbfa7-b944-4a28-b32e-011324bf44b7","Type":"ContainerStarted","Data":"8e50394403a87bd87e4ac6f63996f4994de4cd8e0e9e182266780fcc8ff91425"}
Nov 28 17:17:29 crc kubenswrapper[5024]: I1128 17:17:29.303582 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:29 crc kubenswrapper[5024]: I1128 17:17:29.304892 5024 generic.go:334] "Generic (PLEG): container finished" podID="63ee2602-779a-4f8d-89e8-e741417fcba9" containerID="7efaf9ac0ab0b5c5d7be8201ea6b56da968037dd76f83dd1cd688e835e196f51" exitCode=0
Nov 28 17:17:29 crc kubenswrapper[5024]: I1128 17:17:29.304935 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerDied","Data":"7efaf9ac0ab0b5c5d7be8201ea6b56da968037dd76f83dd1cd688e835e196f51"}
Nov 28 17:17:29 crc kubenswrapper[5024]: I1128 17:17:29.337930 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d" podStartSLOduration=1.5502462449999999 podStartE2EDuration="9.337900811s" podCreationTimestamp="2025-11-28 17:17:20 +0000 UTC" firstStartedPulling="2025-11-28 17:17:20.922615416 +0000 UTC m=+1142.971536321" lastFinishedPulling="2025-11-28 17:17:28.710269982 +0000 UTC m=+1150.759190887" observedRunningTime="2025-11-28 17:17:29.319668225 +0000 UTC m=+1151.368589130" watchObservedRunningTime="2025-11-28 17:17:29.337900811 +0000 UTC m=+1151.386821716"
Nov 28 17:17:30 crc kubenswrapper[5024]: I1128 17:17:30.314130 5024 generic.go:334] "Generic (PLEG): container finished" podID="63ee2602-779a-4f8d-89e8-e741417fcba9" containerID="27bedc9d3fd15cb29e7a2835346ee3373923f3637febdd8e83f8605a1512c411" exitCode=0
Nov 28 17:17:30 crc kubenswrapper[5024]: I1128 17:17:30.314191 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerDied","Data":"27bedc9d3fd15cb29e7a2835346ee3373923f3637febdd8e83f8605a1512c411"}
Nov 28 17:17:31 crc kubenswrapper[5024]: I1128 17:17:31.324299 5024 generic.go:334] "Generic (PLEG): container finished" podID="63ee2602-779a-4f8d-89e8-e741417fcba9" containerID="c2211778dec847787c1ce017b7ad402416e5f2b7d6b0f60b4a71a22550ad3bc0" exitCode=0
Nov 28 17:17:31 crc kubenswrapper[5024]: I1128 17:17:31.324340 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerDied","Data":"c2211778dec847787c1ce017b7ad402416e5f2b7d6b0f60b4a71a22550ad3bc0"}
Nov 28 17:17:32 crc kubenswrapper[5024]: I1128 17:17:32.080412 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kwp5s"
Nov 28 17:17:32 crc kubenswrapper[5024]: I1128 17:17:32.337710 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"bfd4acf033b9fc4e787d079c62c92a0b12347e1c6b0e1014d45e7faa45dfd1bb"}
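The startup-latency records decompose cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling); pods that pulled nothing (zero "0001-01-01" pull timestamps) report SLO equal to E2E. The frr-k8s-webhook-server numbers above bear this out, checked here with the monotonic m=+ offsets from the log:

    package main

    import "fmt"

    func main() {
    	// Values copied from the frr-k8s-webhook-server record above.
    	const (
    		e2e        = 9.337900811   // podStartE2EDuration, seconds
    		pullStart  = 1142.971536321 // firstStartedPulling, m=+ offset
    		pullFinish = 1150.759190887 // lastFinishedPulling, m=+ offset
    	)
    	// 9.337900811 - (1150.759190887 - 1142.971536321) = 1.550246245,
    	// matching podStartSLOduration=1.5502462449999999 in the log.
    	fmt.Printf("podStartSLOduration ≈ %.9fs\n", e2e-(pullFinish-pullStart))
    }

The same arithmetic reproduces the frr-k8s-fbhkx record further down (13.387052121s E2E minus a 7.290627772s pull window gives the logged 6.096424349).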
pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"bfd4acf033b9fc4e787d079c62c92a0b12347e1c6b0e1014d45e7faa45dfd1bb"} Nov 28 17:17:32 crc kubenswrapper[5024]: I1128 17:17:32.337757 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"8a7cee247fdb9066d40c1bde0d7b944815c1a598641c94469162ca614bab778f"} Nov 28 17:17:32 crc kubenswrapper[5024]: I1128 17:17:32.337769 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"a923f9408ec40cadf9c6cc900e3f3842deabd8bc76160faff9419cd511b0153f"} Nov 28 17:17:32 crc kubenswrapper[5024]: I1128 17:17:32.337975 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"9a5e6fcd8519568a1efd3ada222fd34e9c6d6dc6eac92aa159e516023e930596"} Nov 28 17:17:32 crc kubenswrapper[5024]: I1128 17:17:32.337985 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"816321b9fb10dca1b28927705413e9194e358d560a2a3105f8f3d1471dee3d75"} Nov 28 17:17:33 crc kubenswrapper[5024]: I1128 17:17:33.356010 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fbhkx" event={"ID":"63ee2602-779a-4f8d-89e8-e741417fcba9","Type":"ContainerStarted","Data":"fd184b5f38d9e83cdd01e71b7aafabe6fa04613ee42dc709d72dd9de98f1c15b"} Nov 28 17:17:33 crc kubenswrapper[5024]: I1128 17:17:33.356202 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fbhkx" Nov 28 17:17:33 crc kubenswrapper[5024]: I1128 17:17:33.387070 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fbhkx" podStartSLOduration=6.096424349 podStartE2EDuration="13.387052121s" podCreationTimestamp="2025-11-28 17:17:20 +0000 UTC" firstStartedPulling="2025-11-28 17:17:21.399603682 +0000 UTC m=+1143.448524587" lastFinishedPulling="2025-11-28 17:17:28.690231454 +0000 UTC m=+1150.739152359" observedRunningTime="2025-11-28 17:17:33.379350509 +0000 UTC m=+1155.428271414" watchObservedRunningTime="2025-11-28 17:17:33.387052121 +0000 UTC m=+1155.435973026" Nov 28 17:17:34 crc kubenswrapper[5024]: I1128 17:17:34.947474 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4qm7m"] Nov 28 17:17:34 crc kubenswrapper[5024]: I1128 17:17:34.948791 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4qm7m" Nov 28 17:17:34 crc kubenswrapper[5024]: I1128 17:17:34.953582 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 17:17:34 crc kubenswrapper[5024]: I1128 17:17:34.956590 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7n8vg" Nov 28 17:17:34 crc kubenswrapper[5024]: I1128 17:17:34.956590 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 17:17:34 crc kubenswrapper[5024]: I1128 17:17:34.958813 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4qm7m"] Nov 28 17:17:35 crc kubenswrapper[5024]: I1128 17:17:35.038138 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkh2x\" (UniqueName: \"kubernetes.io/projected/813e4661-d5be-4586-9ba8-041425261f02-kube-api-access-dkh2x\") pod \"openstack-operator-index-4qm7m\" (UID: \"813e4661-d5be-4586-9ba8-041425261f02\") " pod="openstack-operators/openstack-operator-index-4qm7m" Nov 28 17:17:35 crc kubenswrapper[5024]: I1128 17:17:35.140361 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkh2x\" (UniqueName: \"kubernetes.io/projected/813e4661-d5be-4586-9ba8-041425261f02-kube-api-access-dkh2x\") pod \"openstack-operator-index-4qm7m\" (UID: \"813e4661-d5be-4586-9ba8-041425261f02\") " pod="openstack-operators/openstack-operator-index-4qm7m" Nov 28 17:17:35 crc kubenswrapper[5024]: I1128 17:17:35.162422 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkh2x\" (UniqueName: \"kubernetes.io/projected/813e4661-d5be-4586-9ba8-041425261f02-kube-api-access-dkh2x\") pod \"openstack-operator-index-4qm7m\" (UID: \"813e4661-d5be-4586-9ba8-041425261f02\") " pod="openstack-operators/openstack-operator-index-4qm7m" Nov 28 17:17:35 crc kubenswrapper[5024]: I1128 17:17:35.276282 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4qm7m" Nov 28 17:17:35 crc kubenswrapper[5024]: I1128 17:17:35.681952 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4qm7m"] Nov 28 17:17:35 crc kubenswrapper[5024]: W1128 17:17:35.693812 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod813e4661_d5be_4586_9ba8_041425261f02.slice/crio-5e63b7b6135ddd30e2403732eac9ac30ff5ee54f2c501cd241ec86be7f61240f WatchSource:0}: Error finding container 5e63b7b6135ddd30e2403732eac9ac30ff5ee54f2c501cd241ec86be7f61240f: Status 404 returned error can't find the container with id 5e63b7b6135ddd30e2403732eac9ac30ff5ee54f2c501cd241ec86be7f61240f Nov 28 17:17:36 crc kubenswrapper[5024]: I1128 17:17:36.078013 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fbhkx" Nov 28 17:17:36 crc kubenswrapper[5024]: I1128 17:17:36.121376 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fbhkx" Nov 28 17:17:36 crc kubenswrapper[5024]: I1128 17:17:36.386696 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4qm7m" event={"ID":"813e4661-d5be-4586-9ba8-041425261f02","Type":"ContainerStarted","Data":"5e63b7b6135ddd30e2403732eac9ac30ff5ee54f2c501cd241ec86be7f61240f"} Nov 28 17:17:37 crc kubenswrapper[5024]: I1128 17:17:37.574660 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:17:37 crc kubenswrapper[5024]: I1128 17:17:37.574751 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:17:38 crc kubenswrapper[5024]: I1128 17:17:38.317650 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4qm7m"] Nov 28 17:17:38 crc kubenswrapper[5024]: I1128 17:17:38.402745 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4qm7m" event={"ID":"813e4661-d5be-4586-9ba8-041425261f02","Type":"ContainerStarted","Data":"a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e"} Nov 28 17:17:38 crc kubenswrapper[5024]: I1128 17:17:38.422406 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4qm7m" podStartSLOduration=2.063131668 podStartE2EDuration="4.422385583s" podCreationTimestamp="2025-11-28 17:17:34 +0000 UTC" firstStartedPulling="2025-11-28 17:17:35.69645489 +0000 UTC m=+1157.745375795" lastFinishedPulling="2025-11-28 17:17:38.055708805 +0000 UTC m=+1160.104629710" observedRunningTime="2025-11-28 17:17:38.417694697 +0000 UTC m=+1160.466615622" watchObservedRunningTime="2025-11-28 17:17:38.422385583 +0000 UTC m=+1160.471306488" Nov 28 17:17:38 crc kubenswrapper[5024]: I1128 17:17:38.920043 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pvv5m"] Nov 28 17:17:38 crc 
Nov 28 17:17:38 crc kubenswrapper[5024]: I1128 17:17:38.941559 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pvv5m"]
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.016894 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rtc\" (UniqueName: \"kubernetes.io/projected/6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6-kube-api-access-x2rtc\") pod \"openstack-operator-index-pvv5m\" (UID: \"6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6\") " pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.118622 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rtc\" (UniqueName: \"kubernetes.io/projected/6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6-kube-api-access-x2rtc\") pod \"openstack-operator-index-pvv5m\" (UID: \"6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6\") " pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.149168 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rtc\" (UniqueName: \"kubernetes.io/projected/6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6-kube-api-access-x2rtc\") pod \"openstack-operator-index-pvv5m\" (UID: \"6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6\") " pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.244738 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.412456 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-4qm7m" podUID="813e4661-d5be-4586-9ba8-041425261f02" containerName="registry-server" containerID="cri-o://a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e" gracePeriod=2
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.651411 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pvv5m"]
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.748558 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4qm7m"
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.833798 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkh2x\" (UniqueName: \"kubernetes.io/projected/813e4661-d5be-4586-9ba8-041425261f02-kube-api-access-dkh2x\") pod \"813e4661-d5be-4586-9ba8-041425261f02\" (UID: \"813e4661-d5be-4586-9ba8-041425261f02\") "
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.839964 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813e4661-d5be-4586-9ba8-041425261f02-kube-api-access-dkh2x" (OuterVolumeSpecName: "kube-api-access-dkh2x") pod "813e4661-d5be-4586-9ba8-041425261f02" (UID: "813e4661-d5be-4586-9ba8-041425261f02"). InnerVolumeSpecName "kube-api-access-dkh2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:17:39 crc kubenswrapper[5024]: I1128 17:17:39.936535 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkh2x\" (UniqueName: \"kubernetes.io/projected/813e4661-d5be-4586-9ba8-041425261f02-kube-api-access-dkh2x\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.420625 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pvv5m" event={"ID":"6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6","Type":"ContainerStarted","Data":"a94b86441b8643c015a4bc80e4e99c3457a9c8dc642be220c7f624bcc58838a4"}
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.420949 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pvv5m" event={"ID":"6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6","Type":"ContainerStarted","Data":"d5ce3167f01b4d0ec842c9fb920cd9c25e141b900202f25b4c9305bd12a9e16f"}
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.422236 5024 generic.go:334] "Generic (PLEG): container finished" podID="813e4661-d5be-4586-9ba8-041425261f02" containerID="a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e" exitCode=0
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.422271 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4qm7m" event={"ID":"813e4661-d5be-4586-9ba8-041425261f02","Type":"ContainerDied","Data":"a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e"}
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.422308 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4qm7m"
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.422332 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4qm7m" event={"ID":"813e4661-d5be-4586-9ba8-041425261f02","Type":"ContainerDied","Data":"5e63b7b6135ddd30e2403732eac9ac30ff5ee54f2c501cd241ec86be7f61240f"}
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.422358 5024 scope.go:117] "RemoveContainer" containerID="a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e"
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.447420 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pvv5m" podStartSLOduration=2.392789009 podStartE2EDuration="2.447385873s" podCreationTimestamp="2025-11-28 17:17:38 +0000 UTC" firstStartedPulling="2025-11-28 17:17:39.659774334 +0000 UTC m=+1161.708695239" lastFinishedPulling="2025-11-28 17:17:39.714371188 +0000 UTC m=+1161.763292103" observedRunningTime="2025-11-28 17:17:40.436312034 +0000 UTC m=+1162.485232949" watchObservedRunningTime="2025-11-28 17:17:40.447385873 +0000 UTC m=+1162.496306778"
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.449812 5024 scope.go:117] "RemoveContainer" containerID="a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e"
Nov 28 17:17:40 crc kubenswrapper[5024]: E1128 17:17:40.450367 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e\": container with ID starting with a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e not found: ID does not exist" containerID="a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e"
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.450426 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e"} err="failed to get container status \"a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e\": rpc error: code = NotFound desc = could not find container \"a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e\": container with ID starting with a44cfe292aab5f12e1d91db6918bab6ac0aa233879ad4e5f095e8349de75f58e not found: ID does not exist"
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.465011 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-4qm7m"]
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.471259 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-4qm7m"]
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.492929 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-8v44d"
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.511308 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813e4661-d5be-4586-9ba8-041425261f02" path="/var/lib/kubelet/pods/813e4661-d5be-4586-9ba8-041425261f02/volumes"
Nov 28 17:17:40 crc kubenswrapper[5024]: I1128 17:17:40.606712 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-gh2lw"
Nov 28 17:17:41 crc kubenswrapper[5024]: I1128 17:17:41.080254 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fbhkx"
Nov 28 17:17:49 crc kubenswrapper[5024]: I1128 17:17:49.246684 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:49 crc kubenswrapper[5024]: I1128 17:17:49.247285 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:49 crc kubenswrapper[5024]: I1128 17:17:49.274586 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:49 crc kubenswrapper[5024]: I1128 17:17:49.529994 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-pvv5m"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.181111 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"]
Nov 28 17:17:51 crc kubenswrapper[5024]: E1128 17:17:51.181704 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813e4661-d5be-4586-9ba8-041425261f02" containerName="registry-server"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.181724 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="813e4661-d5be-4586-9ba8-041425261f02" containerName="registry-server"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.181946 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="813e4661-d5be-4586-9ba8-041425261f02" containerName="registry-server"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.183430 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.186148 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-chh5g"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.194483 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"]
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.267445 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-util\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.267533 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64dhm\" (UniqueName: \"kubernetes.io/projected/f1a26dc6-74d7-4850-9af2-2e136ce1a480-kube-api-access-64dhm\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.267804 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-bundle\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.369303 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-util\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.369427 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64dhm\" (UniqueName: \"kubernetes.io/projected/f1a26dc6-74d7-4850-9af2-2e136ce1a480-kube-api-access-64dhm\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.369556 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-bundle\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.369957 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-util\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.370114 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-bundle\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.388988 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64dhm\" (UniqueName: \"kubernetes.io/projected/f1a26dc6-74d7-4850-9af2-2e136ce1a480-kube-api-access-64dhm\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.500552 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:51 crc kubenswrapper[5024]: I1128 17:17:51.928592 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"]
Nov 28 17:17:51 crc kubenswrapper[5024]: W1128 17:17:51.933938 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1a26dc6_74d7_4850_9af2_2e136ce1a480.slice/crio-8ba8b8f788ad852a7ff144b24f40b1c8a82d8b189102f0ef681c98bbf73fa667 WatchSource:0}: Error finding container 8ba8b8f788ad852a7ff144b24f40b1c8a82d8b189102f0ef681c98bbf73fa667: Status 404 returned error can't find the container with id 8ba8b8f788ad852a7ff144b24f40b1c8a82d8b189102f0ef681c98bbf73fa667
Nov 28 17:17:52 crc kubenswrapper[5024]: I1128 17:17:52.528389 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4" event={"ID":"f1a26dc6-74d7-4850-9af2-2e136ce1a480","Type":"ContainerStarted","Data":"8ba8b8f788ad852a7ff144b24f40b1c8a82d8b189102f0ef681c98bbf73fa667"}
Nov 28 17:17:53 crc kubenswrapper[5024]: I1128 17:17:53.536446 5024 generic.go:334] "Generic (PLEG): container finished" podID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerID="baf4c66111d8cb01ea7b6b211d6eacd8ee294d0491db74cccde1abc3708ce913" exitCode=0
Nov 28 17:17:53 crc kubenswrapper[5024]: I1128 17:17:53.536514 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4" event={"ID":"f1a26dc6-74d7-4850-9af2-2e136ce1a480","Type":"ContainerDied","Data":"baf4c66111d8cb01ea7b6b211d6eacd8ee294d0491db74cccde1abc3708ce913"}
Nov 28 17:17:54 crc kubenswrapper[5024]: E1128 17:17:54.079501 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1a26dc6_74d7_4850_9af2_2e136ce1a480.slice/crio-conmon-d858c84065d502f209a855f87897bdeafd14ba1956e2567d8a0995f904beeb33.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1a26dc6_74d7_4850_9af2_2e136ce1a480.slice/crio-d858c84065d502f209a855f87897bdeafd14ba1956e2567d8a0995f904beeb33.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 17:17:54 crc kubenswrapper[5024]: I1128 17:17:54.545668 5024 generic.go:334] "Generic (PLEG): container finished" podID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerID="d858c84065d502f209a855f87897bdeafd14ba1956e2567d8a0995f904beeb33" exitCode=0
Nov 28 17:17:54 crc kubenswrapper[5024]: I1128 17:17:54.545733 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4" event={"ID":"f1a26dc6-74d7-4850-9af2-2e136ce1a480","Type":"ContainerDied","Data":"d858c84065d502f209a855f87897bdeafd14ba1956e2567d8a0995f904beeb33"}
Nov 28 17:17:55 crc kubenswrapper[5024]: I1128 17:17:55.556406 5024 generic.go:334] "Generic (PLEG): container finished" podID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerID="cdc7a503a852cc4e9de687598b5a9b79daca4a7a5b79facaa69f00507d387537" exitCode=0
Nov 28 17:17:55 crc kubenswrapper[5024]: I1128 17:17:55.556498 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4" event={"ID":"f1a26dc6-74d7-4850-9af2-2e136ce1a480","Type":"ContainerDied","Data":"cdc7a503a852cc4e9de687598b5a9b79daca4a7a5b79facaa69f00507d387537"}
Nov 28 17:17:56 crc kubenswrapper[5024]: I1128 17:17:56.967593 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.074011 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64dhm\" (UniqueName: \"kubernetes.io/projected/f1a26dc6-74d7-4850-9af2-2e136ce1a480-kube-api-access-64dhm\") pod \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") "
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.074246 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-bundle\") pod \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") "
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.074419 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-util\") pod \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\" (UID: \"f1a26dc6-74d7-4850-9af2-2e136ce1a480\") "
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.074893 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-bundle" (OuterVolumeSpecName: "bundle") pod "f1a26dc6-74d7-4850-9af2-2e136ce1a480" (UID: "f1a26dc6-74d7-4850-9af2-2e136ce1a480"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.080422 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a26dc6-74d7-4850-9af2-2e136ce1a480-kube-api-access-64dhm" (OuterVolumeSpecName: "kube-api-access-64dhm") pod "f1a26dc6-74d7-4850-9af2-2e136ce1a480" (UID: "f1a26dc6-74d7-4850-9af2-2e136ce1a480"). InnerVolumeSpecName "kube-api-access-64dhm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.090659 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-util" (OuterVolumeSpecName: "util") pod "f1a26dc6-74d7-4850-9af2-2e136ce1a480" (UID: "f1a26dc6-74d7-4850-9af2-2e136ce1a480"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.177922 5024 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.177963 5024 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1a26dc6-74d7-4850-9af2-2e136ce1a480-util\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.177976 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64dhm\" (UniqueName: \"kubernetes.io/projected/f1a26dc6-74d7-4850-9af2-2e136ce1a480-kube-api-access-64dhm\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.574759 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4" event={"ID":"f1a26dc6-74d7-4850-9af2-2e136ce1a480","Type":"ContainerDied","Data":"8ba8b8f788ad852a7ff144b24f40b1c8a82d8b189102f0ef681c98bbf73fa667"}
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.574802 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ba8b8f788ad852a7ff144b24f40b1c8a82d8b189102f0ef681c98bbf73fa667"
Nov 28 17:17:57 crc kubenswrapper[5024]: I1128 17:17:57.574877 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.020449 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"]
Nov 28 17:18:03 crc kubenswrapper[5024]: E1128 17:18:03.021326 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerName="pull"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.021339 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerName="pull"
Nov 28 17:18:03 crc kubenswrapper[5024]: E1128 17:18:03.021357 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerName="util"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.021362 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerName="util"
Nov 28 17:18:03 crc kubenswrapper[5024]: E1128 17:18:03.021392 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerName="extract"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.021399 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerName="extract"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.021633 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a26dc6-74d7-4850-9af2-2e136ce1a480" containerName="extract"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.022415 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.028272 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-r46l6"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.050641 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"]
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.087147 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcm62\" (UniqueName: \"kubernetes.io/projected/a1ca8cb5-5428-42b6-a72a-332ee1851a88-kube-api-access-wcm62\") pod \"openstack-operator-controller-operator-96cfcb97f-jcxf2\" (UID: \"a1ca8cb5-5428-42b6-a72a-332ee1851a88\") " pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.189122 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcm62\" (UniqueName: \"kubernetes.io/projected/a1ca8cb5-5428-42b6-a72a-332ee1851a88-kube-api-access-wcm62\") pod \"openstack-operator-controller-operator-96cfcb97f-jcxf2\" (UID: \"a1ca8cb5-5428-42b6-a72a-332ee1851a88\") " pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.208592 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcm62\" (UniqueName: \"kubernetes.io/projected/a1ca8cb5-5428-42b6-a72a-332ee1851a88-kube-api-access-wcm62\") pod \"openstack-operator-controller-operator-96cfcb97f-jcxf2\" (UID: \"a1ca8cb5-5428-42b6-a72a-332ee1851a88\") " pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.350614 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"
Nov 28 17:18:03 crc kubenswrapper[5024]: I1128 17:18:03.830948 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"]
Nov 28 17:18:04 crc kubenswrapper[5024]: I1128 17:18:04.676932 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2" event={"ID":"a1ca8cb5-5428-42b6-a72a-332ee1851a88","Type":"ContainerStarted","Data":"92bfba9e9ee14e0bbf77ac8a3b194a38a892f0787d334a42c265ecebcd4fdd6c"}
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.564959 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.565670 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.565733 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf"
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.566732 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d8f6a9c6d8434b82d8868ca2c29dd5353de86fc7a1c9949e65b4d17fd395785"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.566808 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://7d8f6a9c6d8434b82d8868ca2c29dd5353de86fc7a1c9949e65b4d17fd395785" gracePeriod=600
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.706866 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="7d8f6a9c6d8434b82d8868ca2c29dd5353de86fc7a1c9949e65b4d17fd395785" exitCode=0
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.706909 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"7d8f6a9c6d8434b82d8868ca2c29dd5353de86fc7a1c9949e65b4d17fd395785"}
Nov 28 17:18:07 crc kubenswrapper[5024]: I1128 17:18:07.706943 5024 scope.go:117] "RemoveContainer" containerID="88f26a0a596a708c394834d35e939b4bff9c97e9c07da03ec569d30bef11bf70"
Nov 28 17:18:08 crc kubenswrapper[5024]: I1128 17:18:08.715685 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2" event={"ID":"a1ca8cb5-5428-42b6-a72a-332ee1851a88","Type":"ContainerStarted","Data":"f93422c5c48a31cb896ef2bb71ec6a76584d24ad1e7bb6d7f2e516c2c99e0ec7"}
Nov 28 17:18:08 crc kubenswrapper[5024]: I1128 17:18:08.716354 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"
Nov 28 17:18:08 crc kubenswrapper[5024]: I1128 17:18:08.718758 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"c14bd832feb4db8425d0f1a45e06a6d0b13d8ee68a565113d9375a7e774e72b0"}
Nov 28 17:18:08 crc kubenswrapper[5024]: I1128 17:18:08.747139 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2" podStartSLOduration=2.7134664490000002 podStartE2EDuration="6.747122524s" podCreationTimestamp="2025-11-28 17:18:02 +0000 UTC" firstStartedPulling="2025-11-28 17:18:03.851144874 +0000 UTC m=+1185.900065779" lastFinishedPulling="2025-11-28 17:18:07.884800949 +0000 UTC m=+1189.933721854" observedRunningTime="2025-11-28 17:18:08.740625719 +0000 UTC m=+1190.789546644" watchObservedRunningTime="2025-11-28 17:18:08.747122524 +0000 UTC m=+1190.796043429"
Nov 28 17:18:13 crc kubenswrapper[5024]: I1128 17:18:13.353757 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-jcxf2"
Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.633750 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6"]
Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.636229 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6"
Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.638245 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-h8nvw"
Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.712913 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m"]
Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.715163 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m"
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.721223 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dkgl6" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.731915 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6"] Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.750616 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhxb5\" (UniqueName: \"kubernetes.io/projected/433b0a08-3f38-4113-bab1-49eb5f2e0009-kube-api-access-nhxb5\") pod \"cinder-operator-controller-manager-859b6ccc6-v2mb6\" (UID: \"433b0a08-3f38-4113-bab1-49eb5f2e0009\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.759328 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv"] Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.761146 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.765279 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m"] Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.765709 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jtl6s" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.829091 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8"] Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.866076 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.877836 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8xfp4" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.884782 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zcrz\" (UniqueName: \"kubernetes.io/projected/8f617e42-6f3a-45cd-86c7-58b571a13c00-kube-api-access-4zcrz\") pod \"designate-operator-controller-manager-78b4bc895b-mvhfv\" (UID: \"8f617e42-6f3a-45cd-86c7-58b571a13c00\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.886445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhxb5\" (UniqueName: \"kubernetes.io/projected/433b0a08-3f38-4113-bab1-49eb5f2e0009-kube-api-access-nhxb5\") pod \"cinder-operator-controller-manager-859b6ccc6-v2mb6\" (UID: \"433b0a08-3f38-4113-bab1-49eb5f2e0009\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.886561 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lddrx\" (UniqueName: \"kubernetes.io/projected/306b6495-72ef-41db-8bb8-7e3c7f4105f1-kube-api-access-lddrx\") pod \"barbican-operator-controller-manager-7d9dfd778-b7b9m\" (UID: \"306b6495-72ef-41db-8bb8-7e3c7f4105f1\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.895821 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv"] Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.943082 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754"] Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.944966 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.950809 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-pczxz" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.954920 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhxb5\" (UniqueName: \"kubernetes.io/projected/433b0a08-3f38-4113-bab1-49eb5f2e0009-kube-api-access-nhxb5\") pod \"cinder-operator-controller-manager-859b6ccc6-v2mb6\" (UID: \"433b0a08-3f38-4113-bab1-49eb5f2e0009\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.967597 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8"] Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.990618 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lddrx\" (UniqueName: \"kubernetes.io/projected/306b6495-72ef-41db-8bb8-7e3c7f4105f1-kube-api-access-lddrx\") pod \"barbican-operator-controller-manager-7d9dfd778-b7b9m\" (UID: \"306b6495-72ef-41db-8bb8-7e3c7f4105f1\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" Nov 28 17:18:32 crc kubenswrapper[5024]: I1128 17:18:32.990678 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h7kq\" (UniqueName: \"kubernetes.io/projected/c242c002-7db6-4753-9e37-8b61faa233e7-kube-api-access-4h7kq\") pod \"glance-operator-controller-manager-668d9c48b9-5vxc8\" (UID: \"c242c002-7db6-4753-9e37-8b61faa233e7\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:32.990755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zcrz\" (UniqueName: \"kubernetes.io/projected/8f617e42-6f3a-45cd-86c7-58b571a13c00-kube-api-access-4zcrz\") pod \"designate-operator-controller-manager-78b4bc895b-mvhfv\" (UID: \"8f617e42-6f3a-45cd-86c7-58b571a13c00\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.004392 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.008807 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.028004 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.029803 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lddrx\" (UniqueName: \"kubernetes.io/projected/306b6495-72ef-41db-8bb8-7e3c7f4105f1-kube-api-access-lddrx\") pod \"barbican-operator-controller-manager-7d9dfd778-b7b9m\" (UID: \"306b6495-72ef-41db-8bb8-7e3c7f4105f1\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.036610 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zcrz\" (UniqueName: \"kubernetes.io/projected/8f617e42-6f3a-45cd-86c7-58b571a13c00-kube-api-access-4zcrz\") pod \"designate-operator-controller-manager-78b4bc895b-mvhfv\" (UID: \"8f617e42-6f3a-45cd-86c7-58b571a13c00\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.061543 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.067547 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.071835 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rljvn" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.072531 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.090677 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.092776 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.102826 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zjn86" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.103046 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.115831 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.118308 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.120177 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.120845 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-r4jzm" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.131498 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h7kq\" (UniqueName: \"kubernetes.io/projected/c242c002-7db6-4753-9e37-8b61faa233e7-kube-api-access-4h7kq\") pod \"glance-operator-controller-manager-668d9c48b9-5vxc8\" (UID: \"c242c002-7db6-4753-9e37-8b61faa233e7\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.132292 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm88z\" (UniqueName: \"kubernetes.io/projected/7b427f08-8eba-4f54-ad75-6cf94b532537-kube-api-access-vm88z\") pod \"heat-operator-controller-manager-5f64f6f8bb-vk754\" (UID: \"7b427f08-8eba-4f54-ad75-6cf94b532537\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.152769 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h7kq\" (UniqueName: \"kubernetes.io/projected/c242c002-7db6-4753-9e37-8b61faa233e7-kube-api-access-4h7kq\") pod \"glance-operator-controller-manager-668d9c48b9-5vxc8\" (UID: \"c242c002-7db6-4753-9e37-8b61faa233e7\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.179930 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.210552 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.225367 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.227171 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.231071 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jfl9l" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.235290 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d69n\" (UniqueName: \"kubernetes.io/projected/dd8097de-552e-414a-98d1-314930b2d45b-kube-api-access-7d69n\") pod \"horizon-operator-controller-manager-68c6d99b8f-htnxm\" (UID: \"dd8097de-552e-414a-98d1-314930b2d45b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.235394 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gznh9\" (UniqueName: \"kubernetes.io/projected/0c2c7e62-d724-45fa-8058-085b951992fc-kube-api-access-gznh9\") pod \"ironic-operator-controller-manager-6c548fd776-6wjhl\" (UID: \"0c2c7e62-d724-45fa-8058-085b951992fc\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.235411 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.235465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm88z\" (UniqueName: \"kubernetes.io/projected/7b427f08-8eba-4f54-ad75-6cf94b532537-kube-api-access-vm88z\") pod \"heat-operator-controller-manager-5f64f6f8bb-vk754\" (UID: \"7b427f08-8eba-4f54-ad75-6cf94b532537\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.235498 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcr5p\" (UniqueName: \"kubernetes.io/projected/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-kube-api-access-dcr5p\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.271694 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm88z\" (UniqueName: \"kubernetes.io/projected/7b427f08-8eba-4f54-ad75-6cf94b532537-kube-api-access-vm88z\") pod \"heat-operator-controller-manager-5f64f6f8bb-vk754\" (UID: \"7b427f08-8eba-4f54-ad75-6cf94b532537\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.294182 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.322745 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.340335 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gznh9\" (UniqueName: \"kubernetes.io/projected/0c2c7e62-d724-45fa-8058-085b951992fc-kube-api-access-gznh9\") pod \"ironic-operator-controller-manager-6c548fd776-6wjhl\" (UID: \"0c2c7e62-d724-45fa-8058-085b951992fc\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.340384 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.340483 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcr5p\" (UniqueName: \"kubernetes.io/projected/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-kube-api-access-dcr5p\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.340518 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh64d\" (UniqueName: \"kubernetes.io/projected/c19bfd5c-ac24-41e8-95d0-1c0b6661032d-kube-api-access-sh64d\") pod \"keystone-operator-controller-manager-546d4bdf48-k8qw6\" (UID: \"c19bfd5c-ac24-41e8-95d0-1c0b6661032d\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.340577 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d69n\" (UniqueName: \"kubernetes.io/projected/dd8097de-552e-414a-98d1-314930b2d45b-kube-api-access-7d69n\") pod \"horizon-operator-controller-manager-68c6d99b8f-htnxm\" (UID: \"dd8097de-552e-414a-98d1-314930b2d45b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" Nov 28 17:18:33 crc kubenswrapper[5024]: E1128 17:18:33.341268 5024 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:33 crc kubenswrapper[5024]: E1128 17:18:33.341321 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert podName:7178ca93-de7b-4c2b-8235-41c6dbd4b1a1 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:33.841299537 +0000 UTC m=+1215.890220442 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert") pod "infra-operator-controller-manager-57548d458d-nxs7s" (UID: "7178ca93-de7b-4c2b-8235-41c6dbd4b1a1") : secret "infra-operator-webhook-server-cert" not found
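
This failure is an ordering race, not corruption: the infra-operator pod mounts a Secret volume whose source (infra-operator-webhook-server-cert) has not been created yet, presumably because whatever issues the webhook certificate runs after the pod is scheduled, so the kubelet fails the mount and retries. A hedged client-go sketch for checking whether the Secret has appeared; the namespace and name are from the log, and the kubeconfig handling is illustrative only:

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        s, err := client.CoreV1().Secrets("openstack-operators").
            Get(context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
        switch {
        case apierrors.IsNotFound(err):
            // Matches the log: the secret volume plugin keeps failing
            // (and retrying) until this Secret is created.
            fmt.Println("secret not created yet; kubelet will keep retrying the mount")
        case err != nil:
            panic(err)
        default:
            fmt.Printf("secret present with %d keys\n", len(s.Data))
        }
    }

Once the Secret exists, the next kubelet retry of the mount should succeed.

Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.351131 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.352797 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.356409 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-ln8h6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.387643 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d69n\" (UniqueName: \"kubernetes.io/projected/dd8097de-552e-414a-98d1-314930b2d45b-kube-api-access-7d69n\") pod \"horizon-operator-controller-manager-68c6d99b8f-htnxm\" (UID: \"dd8097de-552e-414a-98d1-314930b2d45b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.391878 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcr5p\" (UniqueName: \"kubernetes.io/projected/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-kube-api-access-dcr5p\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.406574 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gznh9\" (UniqueName: \"kubernetes.io/projected/0c2c7e62-d724-45fa-8058-085b951992fc-kube-api-access-gznh9\") pod \"ironic-operator-controller-manager-6c548fd776-6wjhl\" (UID: \"0c2c7e62-d724-45fa-8058-085b951992fc\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.415423 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.418670 5024 util.go:30] "No sandbox for pod can be found. 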
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.431037 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-g62l7" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.443510 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.447112 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh64d\" (UniqueName: \"kubernetes.io/projected/c19bfd5c-ac24-41e8-95d0-1c0b6661032d-kube-api-access-sh64d\") pod \"keystone-operator-controller-manager-546d4bdf48-k8qw6\" (UID: \"c19bfd5c-ac24-41e8-95d0-1c0b6661032d\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.447324 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f222j\" (UniqueName: \"kubernetes.io/projected/f3789406-9551-4b4e-9145-86152566a0f8-kube-api-access-f222j\") pod \"manila-operator-controller-manager-6546668bfd-xb9dw\" (UID: \"f3789406-9551-4b4e-9145-86152566a0f8\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.464988 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.484891 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh64d\" (UniqueName: \"kubernetes.io/projected/c19bfd5c-ac24-41e8-95d0-1c0b6661032d-kube-api-access-sh64d\") pod \"keystone-operator-controller-manager-546d4bdf48-k8qw6\" (UID: \"c19bfd5c-ac24-41e8-95d0-1c0b6661032d\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.487073 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.488662 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.494972 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s5tzd" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.513526 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.525932 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.527598 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.528742 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.533099 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-h9tx6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.549370 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f222j\" (UniqueName: \"kubernetes.io/projected/f3789406-9551-4b4e-9145-86152566a0f8-kube-api-access-f222j\") pod \"manila-operator-controller-manager-6546668bfd-xb9dw\" (UID: \"f3789406-9551-4b4e-9145-86152566a0f8\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.549552 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6pl\" (UniqueName: \"kubernetes.io/projected/14970290-c7f7-4b41-9238-1c4127416b42-kube-api-access-wm6pl\") pod \"mariadb-operator-controller-manager-56bbcc9d85-nwtnw\" (UID: \"14970290-c7f7-4b41-9238-1c4127416b42\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.568117 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.571652 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f222j\" (UniqueName: \"kubernetes.io/projected/f3789406-9551-4b4e-9145-86152566a0f8-kube-api-access-f222j\") pod \"manila-operator-controller-manager-6546668bfd-xb9dw\" (UID: \"f3789406-9551-4b4e-9145-86152566a0f8\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.575111 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.576687 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-98vj7"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.578290 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.581170 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6fk8s" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.587986 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.591424 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.599033 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.601212 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.608809 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9gxzq" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.624754 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-98vj7"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.636205 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.641206 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.646141 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wjxjm" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.652953 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlvz\" (UniqueName: \"kubernetes.io/projected/cdc496b3-475b-4a1a-8426-c5f470030d20-kube-api-access-8wlvz\") pod \"nova-operator-controller-manager-697bc559fc-tqqp8\" (UID: \"cdc496b3-475b-4a1a-8426-c5f470030d20\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.653046 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndk4n\" (UniqueName: \"kubernetes.io/projected/3052f534-e5d3-4ac8-8865-8a6de75dc6a2-kube-api-access-ndk4n\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-t8wwx\" (UID: \"3052f534-e5d3-4ac8-8865-8a6de75dc6a2\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.653177 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6pl\" (UniqueName: \"kubernetes.io/projected/14970290-c7f7-4b41-9238-1c4127416b42-kube-api-access-wm6pl\") pod \"mariadb-operator-controller-manager-56bbcc9d85-nwtnw\" (UID: \"14970290-c7f7-4b41-9238-1c4127416b42\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.661796 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.673456 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.698903 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.703629 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.703960 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6pl\" (UniqueName: \"kubernetes.io/projected/14970290-c7f7-4b41-9238-1c4127416b42-kube-api-access-wm6pl\") pod \"mariadb-operator-controller-manager-56bbcc9d85-nwtnw\" (UID: \"14970290-c7f7-4b41-9238-1c4127416b42\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.720054 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.722080 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.724293 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-74cp6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.755622 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndk4n\" (UniqueName: \"kubernetes.io/projected/3052f534-e5d3-4ac8-8865-8a6de75dc6a2-kube-api-access-ndk4n\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-t8wwx\" (UID: \"3052f534-e5d3-4ac8-8865-8a6de75dc6a2\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.755967 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq966\" (UniqueName: \"kubernetes.io/projected/ec29f6e1-030b-4bce-a179-102ef4038e17-kube-api-access-mq966\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.756168 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.756304 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2m8\" (UniqueName: \"kubernetes.io/projected/6634c4c8-389e-4b40-bc1b-c21e833569cd-kube-api-access-dq2m8\") pod \"octavia-operator-controller-manager-998648c74-98vj7\" (UID: \"6634c4c8-389e-4b40-bc1b-c21e833569cd\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.756552 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnnw\" (UniqueName: \"kubernetes.io/projected/fd737aa9-6973-41a6-8b79-03d85540253c-kube-api-access-pnnnw\") pod \"ovn-operator-controller-manager-b6456fdb6-gdvrn\" (UID: \"fd737aa9-6973-41a6-8b79-03d85540253c\") " 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.756682 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlvz\" (UniqueName: \"kubernetes.io/projected/cdc496b3-475b-4a1a-8426-c5f470030d20-kube-api-access-8wlvz\") pod \"nova-operator-controller-manager-697bc559fc-tqqp8\" (UID: \"cdc496b3-475b-4a1a-8426-c5f470030d20\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.760237 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.774439 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndk4n\" (UniqueName: \"kubernetes.io/projected/3052f534-e5d3-4ac8-8865-8a6de75dc6a2-kube-api-access-ndk4n\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-t8wwx\" (UID: \"3052f534-e5d3-4ac8-8865-8a6de75dc6a2\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.776454 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.778412 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.784523 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlvz\" (UniqueName: \"kubernetes.io/projected/cdc496b3-475b-4a1a-8426-c5f470030d20-kube-api-access-8wlvz\") pod \"nova-operator-controller-manager-697bc559fc-tqqp8\" (UID: \"cdc496b3-475b-4a1a-8426-c5f470030d20\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.786952 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.800724 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zgm68" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.808107 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.809905 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.815292 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z6hs9" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.819150 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.829616 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.862303 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnnw\" (UniqueName: \"kubernetes.io/projected/fd737aa9-6973-41a6-8b79-03d85540253c-kube-api-access-pnnnw\") pod \"ovn-operator-controller-manager-b6456fdb6-gdvrn\" (UID: \"fd737aa9-6973-41a6-8b79-03d85540253c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.862735 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq966\" (UniqueName: \"kubernetes.io/projected/ec29f6e1-030b-4bce-a179-102ef4038e17-kube-api-access-mq966\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.862806 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.862838 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2m8\" (UniqueName: \"kubernetes.io/projected/6634c4c8-389e-4b40-bc1b-c21e833569cd-kube-api-access-dq2m8\") pod \"octavia-operator-controller-manager-998648c74-98vj7\" (UID: \"6634c4c8-389e-4b40-bc1b-c21e833569cd\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.862900 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpml\" (UniqueName: \"kubernetes.io/projected/f9991185-b617-4567-b70f-4adf629d5aab-kube-api-access-2qpml\") pod \"placement-operator-controller-manager-78f8948974-hrbx6\" (UID: \"f9991185-b617-4567-b70f-4adf629d5aab\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.862954 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:33 crc kubenswrapper[5024]: E1128 17:18:33.863122 5024 secret.go:188] Couldn't get 
secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:33 crc kubenswrapper[5024]: E1128 17:18:33.863166 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert podName:7178ca93-de7b-4c2b-8235-41c6dbd4b1a1 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:34.863151674 +0000 UTC m=+1216.912072579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert") pod "infra-operator-controller-manager-57548d458d-nxs7s" (UID: "7178ca93-de7b-4c2b-8235-41c6dbd4b1a1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:33 crc kubenswrapper[5024]: E1128 17:18:33.863913 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:33 crc kubenswrapper[5024]: E1128 17:18:33.863942 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert podName:ec29f6e1-030b-4bce-a179-102ef4038e17 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:34.363933365 +0000 UTC m=+1216.412854270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" (UID: "ec29f6e1-030b-4bce-a179-102ef4038e17") : secret "openstack-baremetal-operator-webhook-server-cert" not found
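
Note the retry spacing here: the infra-operator cert volume was first deferred by 500ms and is now deferred by 1s, while the baremetal cert volume, failing for the first time, starts its own clock at 500ms. Each pending volume operation backs off exponentially and independently. A sketch of that doubling schedule; the 500ms starting point and the doubling are visible in the log, while the 2m2s ceiling is an assumption, not something this log shows:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond            // first durationBeforeRetry in the log
        maxDelay := 2*time.Minute + 2*time.Second  // assumed cap, not shown in this log
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2 // matches 500ms -> 1s between the two infra-operator failures
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.871533 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-skq8p"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.871706 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.873437 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.875125 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xqn5t" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.881147 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnnw\" (UniqueName: \"kubernetes.io/projected/fd737aa9-6973-41a6-8b79-03d85540253c-kube-api-access-pnnnw\") pod \"ovn-operator-controller-manager-b6456fdb6-gdvrn\" (UID: \"fd737aa9-6973-41a6-8b79-03d85540253c\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.891674 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-skq8p"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.905641 5024 util.go:30] "No sandbox for pod can be found. 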
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.918883 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2m8\" (UniqueName: \"kubernetes.io/projected/6634c4c8-389e-4b40-bc1b-c21e833569cd-kube-api-access-dq2m8\") pod \"octavia-operator-controller-manager-998648c74-98vj7\" (UID: \"6634c4c8-389e-4b40-bc1b-c21e833569cd\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.925989 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.926378 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.931940 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.935683 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-m44zz" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.940356 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.941397 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq966\" (UniqueName: \"kubernetes.io/projected/ec29f6e1-030b-4bce-a179-102ef4038e17-kube-api-access-mq966\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.944596 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.966531 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpml\" (UniqueName: \"kubernetes.io/projected/f9991185-b617-4567-b70f-4adf629d5aab-kube-api-access-2qpml\") pod \"placement-operator-controller-manager-78f8948974-hrbx6\" (UID: \"f9991185-b617-4567-b70f-4adf629d5aab\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.966622 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjcl\" (UniqueName: \"kubernetes.io/projected/09ca01b9-ef1e-443d-90af-101d476cbcb5-kube-api-access-svjcl\") pod \"test-operator-controller-manager-5854674fcc-skq8p\" (UID: \"09ca01b9-ef1e-443d-90af-101d476cbcb5\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.966743 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j567t\" (UniqueName: \"kubernetes.io/projected/c98df7f0-4e94-48f8-9ef1-2148b7909e24-kube-api-access-j567t\") pod 
\"swift-operator-controller-manager-5f8c65bbfc-27b8t\" (UID: \"c98df7f0-4e94-48f8-9ef1-2148b7909e24\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.966791 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tkxk\" (UniqueName: \"kubernetes.io/projected/7bfcb463-0064-4758-bbe8-70b0afd2b3bd-kube-api-access-6tkxk\") pod \"telemetry-operator-controller-manager-6b5d64d475-v8bhk\" (UID: \"7bfcb463-0064-4758-bbe8-70b0afd2b3bd\") " pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.973330 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk"] Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.974631 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.976575 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tbgfj" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.977583 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.978087 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" event={"ID":"306b6495-72ef-41db-8bb8-7e3c7f4105f1","Type":"ContainerStarted","Data":"a752c5c3655521bc0a3cccff9f3803eeeb2e0239f30f190c0c4be81f364245f6"} Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.979966 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 28 17:18:33 crc kubenswrapper[5024]: I1128 17:18:33.982331 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.001653 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpml\" (UniqueName: \"kubernetes.io/projected/f9991185-b617-4567-b70f-4adf629d5aab-kube-api-access-2qpml\") pod \"placement-operator-controller-manager-78f8948974-hrbx6\" (UID: \"f9991185-b617-4567-b70f-4adf629d5aab\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.015401 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.016667 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.019007 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rxnj7" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.022781 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.068682 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjd7h\" (UniqueName: \"kubernetes.io/projected/3d3cfd45-e574-415e-87a6-2fab660d955a-kube-api-access-pjd7h\") pod \"watcher-operator-controller-manager-769dc69bc-9zx4m\" (UID: \"3d3cfd45-e574-415e-87a6-2fab660d955a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.068774 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.068837 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j567t\" (UniqueName: \"kubernetes.io/projected/c98df7f0-4e94-48f8-9ef1-2148b7909e24-kube-api-access-j567t\") pod \"swift-operator-controller-manager-5f8c65bbfc-27b8t\" (UID: \"c98df7f0-4e94-48f8-9ef1-2148b7909e24\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.068887 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tkxk\" (UniqueName: \"kubernetes.io/projected/7bfcb463-0064-4758-bbe8-70b0afd2b3bd-kube-api-access-6tkxk\") pod \"telemetry-operator-controller-manager-6b5d64d475-v8bhk\" (UID: \"7bfcb463-0064-4758-bbe8-70b0afd2b3bd\") " pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.068968 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6ds\" (UniqueName: \"kubernetes.io/projected/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-kube-api-access-xk6ds\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.069097 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.069168 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svjcl\" (UniqueName: 
\"kubernetes.io/projected/09ca01b9-ef1e-443d-90af-101d476cbcb5-kube-api-access-svjcl\") pod \"test-operator-controller-manager-5854674fcc-skq8p\" (UID: \"09ca01b9-ef1e-443d-90af-101d476cbcb5\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.076569 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.087701 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.090603 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tkxk\" (UniqueName: \"kubernetes.io/projected/7bfcb463-0064-4758-bbe8-70b0afd2b3bd-kube-api-access-6tkxk\") pod \"telemetry-operator-controller-manager-6b5d64d475-v8bhk\" (UID: \"7bfcb463-0064-4758-bbe8-70b0afd2b3bd\") " pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.099327 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j567t\" (UniqueName: \"kubernetes.io/projected/c98df7f0-4e94-48f8-9ef1-2148b7909e24-kube-api-access-j567t\") pod \"swift-operator-controller-manager-5f8c65bbfc-27b8t\" (UID: \"c98df7f0-4e94-48f8-9ef1-2148b7909e24\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.099479 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svjcl\" (UniqueName: \"kubernetes.io/projected/09ca01b9-ef1e-443d-90af-101d476cbcb5-kube-api-access-svjcl\") pod \"test-operator-controller-manager-5854674fcc-skq8p\" (UID: \"09ca01b9-ef1e-443d-90af-101d476cbcb5\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.152476 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.179428 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.197080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.197449 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk6ds\" (UniqueName: \"kubernetes.io/projected/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-kube-api-access-xk6ds\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.197538 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.197593 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w77mn\" (UniqueName: \"kubernetes.io/projected/c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2-kube-api-access-w77mn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phvrw\" (UID: \"c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.197761 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjd7h\" (UniqueName: \"kubernetes.io/projected/3d3cfd45-e574-415e-87a6-2fab660d955a-kube-api-access-pjd7h\") pod \"watcher-operator-controller-manager-769dc69bc-9zx4m\" (UID: \"3d3cfd45-e574-415e-87a6-2fab660d955a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.197949 5024 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.198034 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:34.698002331 +0000 UTC m=+1216.746923236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "webhook-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.198396 5024 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.198906 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:34.698887095 +0000 UTC m=+1216.747808000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "metrics-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.228000 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.231839 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjd7h\" (UniqueName: \"kubernetes.io/projected/3d3cfd45-e574-415e-87a6-2fab660d955a-kube-api-access-pjd7h\") pod \"watcher-operator-controller-manager-769dc69bc-9zx4m\" (UID: \"3d3cfd45-e574-415e-87a6-2fab660d955a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.238068 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk6ds\" (UniqueName: \"kubernetes.io/projected/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-kube-api-access-xk6ds\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.239821 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.246492 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.260995 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.306997 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w77mn\" (UniqueName: \"kubernetes.io/projected/c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2-kube-api-access-w77mn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phvrw\" (UID: \"c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.331848 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w77mn\" (UniqueName: \"kubernetes.io/projected/c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2-kube-api-access-w77mn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phvrw\" (UID: \"c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.335648 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8"] Nov 28 17:18:34 crc kubenswrapper[5024]: W1128 17:18:34.356000 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc242c002_7db6_4753_9e37_8b61faa233e7.slice/crio-734a5f01a52cc3e90bef1ebaf8d7567c50135eeab2dde6c7bf458726112d765a WatchSource:0}: Error finding container 734a5f01a52cc3e90bef1ebaf8d7567c50135eeab2dde6c7bf458726112d765a: Status 404 returned error can't find the container with id 734a5f01a52cc3e90bef1ebaf8d7567c50135eeab2dde6c7bf458726112d765a Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.409443 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.409598 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.409651 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert podName:ec29f6e1-030b-4bce-a179-102ef4038e17 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:35.409635623 +0000 UTC m=+1217.458556528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" (UID: "ec29f6e1-030b-4bce-a179-102ef4038e17") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.581417 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.716611 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.716746 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.716873 5024 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.716933 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:35.716908878 +0000 UTC m=+1217.765829783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "webhook-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.717201 5024 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.717282 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:35.717270968 +0000 UTC m=+1217.766191873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "metrics-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.807624 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl"] Nov 28 17:18:34 crc kubenswrapper[5024]: W1128 17:18:34.819375 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc19bfd5c_ac24_41e8_95d0_1c0b6661032d.slice/crio-f277eb562e96c218a947399c234a427a67610ceea9c8f3ad475d5f3bbc83cc04 WatchSource:0}: Error finding container f277eb562e96c218a947399c234a427a67610ceea9c8f3ad475d5f3bbc83cc04: Status 404 returned error can't find the container with id f277eb562e96c218a947399c234a427a67610ceea9c8f3ad475d5f3bbc83cc04 Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.820410 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw"] Nov 28 17:18:34 crc kubenswrapper[5024]: W1128 17:18:34.827909 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3789406_9551_4b4e_9145_86152566a0f8.slice/crio-c12dc5fdd3584401ba7fde7469b9b47e5e1b6ca0194a697f850435f2205a00ab WatchSource:0}: Error finding container c12dc5fdd3584401ba7fde7469b9b47e5e1b6ca0194a697f850435f2205a00ab: Status 404 returned error can't find the container with id c12dc5fdd3584401ba7fde7469b9b47e5e1b6ca0194a697f850435f2205a00ab Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.828342 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.834047 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.846028 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6"] Nov 28 17:18:34 crc kubenswrapper[5024]: I1128 17:18:34.921404 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.921602 5024 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:34 crc kubenswrapper[5024]: E1128 17:18:34.921685 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert podName:7178ca93-de7b-4c2b-8235-41c6dbd4b1a1 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:36.921666806 +0000 UTC m=+1218.970587711 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert") pod "infra-operator-controller-manager-57548d458d-nxs7s" (UID: "7178ca93-de7b-4c2b-8235-41c6dbd4b1a1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.001771 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" event={"ID":"f3789406-9551-4b4e-9145-86152566a0f8","Type":"ContainerStarted","Data":"c12dc5fdd3584401ba7fde7469b9b47e5e1b6ca0194a697f850435f2205a00ab"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.003280 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" event={"ID":"0c2c7e62-d724-45fa-8058-085b951992fc","Type":"ContainerStarted","Data":"55de88ec7b44eff0c877bddd8902649d8032f249f066de5dd09ac844f430b24b"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.008485 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" event={"ID":"c242c002-7db6-4753-9e37-8b61faa233e7","Type":"ContainerStarted","Data":"734a5f01a52cc3e90bef1ebaf8d7567c50135eeab2dde6c7bf458726112d765a"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.009889 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" event={"ID":"433b0a08-3f38-4113-bab1-49eb5f2e0009","Type":"ContainerStarted","Data":"585ce2cb8470d3365a2ceb92c612d37ec65afc4985541374d962a73b168ed91f"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.012011 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" event={"ID":"7b427f08-8eba-4f54-ad75-6cf94b532537","Type":"ContainerStarted","Data":"5edc6302446cd5386a8e06c6dfc89ee4a3a9abeddb0a9e4a793a98a3417ff2d9"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.013311 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" event={"ID":"c19bfd5c-ac24-41e8-95d0-1c0b6661032d","Type":"ContainerStarted","Data":"f277eb562e96c218a947399c234a427a67610ceea9c8f3ad475d5f3bbc83cc04"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.015262 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" event={"ID":"dd8097de-552e-414a-98d1-314930b2d45b","Type":"ContainerStarted","Data":"61cdeee4f50a3ba513e570df55b44ecd3c6c828faa16df7e8e80fc45909af8f8"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.016471 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" event={"ID":"8f617e42-6f3a-45cd-86c7-58b571a13c00","Type":"ContainerStarted","Data":"7a92bbcf0c0ba43fe791df494d92b26cd0a2b85d98de347d29e0d4e3d60d6aa6"} Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.190720 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw"] Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.289317 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-98vj7"] Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.325940 5024 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx"] Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.387006 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8"] Nov 28 17:18:35 crc kubenswrapper[5024]: W1128 17:18:35.399799 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdc496b3_475b_4a1a_8426_c5f470030d20.slice/crio-7c212b2ab27592602465e95714587f414d99b4bb435926118e51ffe13b838b15 WatchSource:0}: Error finding container 7c212b2ab27592602465e95714587f414d99b4bb435926118e51ffe13b838b15: Status 404 returned error can't find the container with id 7c212b2ab27592602465e95714587f414d99b4bb435926118e51ffe13b838b15 Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.439687 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.439910 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.439969 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert podName:ec29f6e1-030b-4bce-a179-102ef4038e17 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:37.439951527 +0000 UTC m=+1219.488872432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" (UID: "ec29f6e1-030b-4bce-a179-102ef4038e17") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.757141 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.757608 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.757765 5024 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.757818 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. 
No retries permitted until 2025-11-28 17:18:37.757801556 +0000 UTC m=+1219.806722461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "webhook-server-cert" not found Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.758181 5024 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.758211 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:37.758202037 +0000 UTC m=+1219.807122942 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "metrics-server-cert" not found Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.776456 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m"] Nov 28 17:18:35 crc kubenswrapper[5024]: W1128 17:18:35.789795 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9991185_b617_4567_b70f_4adf629d5aab.slice/crio-436e6d9fd9d44e5795fae4a62cc0d0d0409ade49f00f99748e78a5a906dec5a6 WatchSource:0}: Error finding container 436e6d9fd9d44e5795fae4a62cc0d0d0409ade49f00f99748e78a5a906dec5a6: Status 404 returned error can't find the container with id 436e6d9fd9d44e5795fae4a62cc0d0d0409ade49f00f99748e78a5a906dec5a6 Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.791491 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6"] Nov 28 17:18:35 crc kubenswrapper[5024]: W1128 17:18:35.798494 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd737aa9_6973_41a6_8b79_03d85540253c.slice/crio-4a084ec115d072d8062733ffce8fbb6050f924f114af54e40a60acd7c2596be7 WatchSource:0}: Error finding container 4a084ec115d072d8062733ffce8fbb6050f924f114af54e40a60acd7c2596be7: Status 404 returned error can't find the container with id 4a084ec115d072d8062733ffce8fbb6050f924f114af54e40a60acd7c2596be7 Nov 28 17:18:35 crc kubenswrapper[5024]: W1128 17:18:35.812732 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc98df7f0_4e94_48f8_9ef1_2148b7909e24.slice/crio-8653e7ea3eeddeb71eeadfb0a3521a60aec347d093289cae79a6437cd8586d1a WatchSource:0}: Error finding container 8653e7ea3eeddeb71eeadfb0a3521a60aec347d093289cae79a6437cd8586d1a: Status 404 returned error can't find the container with id 8653e7ea3eeddeb71eeadfb0a3521a60aec347d093289cae79a6437cd8586d1a Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.813902 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn"] Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.829482 5024 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk"] Nov 28 17:18:35 crc kubenswrapper[5024]: W1128 17:18:35.833828 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d3cfd45_e574_415e_87a6_2fab660d955a.slice/crio-32f7c6144aba7c5112bfc09d4e05050694b15779fdd463eeff436584529f0f40 WatchSource:0}: Error finding container 32f7c6144aba7c5112bfc09d4e05050694b15779fdd463eeff436584529f0f40: Status 404 returned error can't find the container with id 32f7c6144aba7c5112bfc09d4e05050694b15779fdd463eeff436584529f0f40 Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.842623 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t"] Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.868141 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw"] Nov 28 17:18:35 crc kubenswrapper[5024]: I1128 17:18:35.933033 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-skq8p"] Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.949371 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-skq8p_openstack-operators(09ca01b9-ef1e-443d-90af-101d476cbcb5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.951760 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-skq8p_openstack-operators(09ca01b9-ef1e-443d-90af-101d476cbcb5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:18:35 crc kubenswrapper[5024]: E1128 17:18:35.953203 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" podUID="09ca01b9-ef1e-443d-90af-101d476cbcb5" Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.042444 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" event={"ID":"3d3cfd45-e574-415e-87a6-2fab660d955a","Type":"ContainerStarted","Data":"32f7c6144aba7c5112bfc09d4e05050694b15779fdd463eeff436584529f0f40"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.044359 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" event={"ID":"09ca01b9-ef1e-443d-90af-101d476cbcb5","Type":"ContainerStarted","Data":"c2c843d08790247e9aa1001efdabc1a7e552b67cbf94ba3f64fdea8c5ed4c1f5"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.046693 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" event={"ID":"3052f534-e5d3-4ac8-8865-8a6de75dc6a2","Type":"ContainerStarted","Data":"54325179eadb67331b2cfc3e56db3fb09df9a194f29d570aed6472cca330d5e0"} Nov 28 17:18:36 crc kubenswrapper[5024]: E1128 17:18:36.049813 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" podUID="09ca01b9-ef1e-443d-90af-101d476cbcb5" Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.051637 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" event={"ID":"7bfcb463-0064-4758-bbe8-70b0afd2b3bd","Type":"ContainerStarted","Data":"e6d89b38780ee2c1f561b9cccace6fc2f15d6c5851f2fe62921970398a17ea46"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.063561 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" event={"ID":"fd737aa9-6973-41a6-8b79-03d85540253c","Type":"ContainerStarted","Data":"4a084ec115d072d8062733ffce8fbb6050f924f114af54e40a60acd7c2596be7"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.075945 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" event={"ID":"c98df7f0-4e94-48f8-9ef1-2148b7909e24","Type":"ContainerStarted","Data":"8653e7ea3eeddeb71eeadfb0a3521a60aec347d093289cae79a6437cd8586d1a"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.089406 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" event={"ID":"f9991185-b617-4567-b70f-4adf629d5aab","Type":"ContainerStarted","Data":"436e6d9fd9d44e5795fae4a62cc0d0d0409ade49f00f99748e78a5a906dec5a6"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.101802 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" event={"ID":"14970290-c7f7-4b41-9238-1c4127416b42","Type":"ContainerStarted","Data":"36e886465fff3567845affb8932e716f16c0ef8a55335fd6bddfb6ea56d5142c"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.107935 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" event={"ID":"cdc496b3-475b-4a1a-8426-c5f470030d20","Type":"ContainerStarted","Data":"7c212b2ab27592602465e95714587f414d99b4bb435926118e51ffe13b838b15"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.117349 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" 
event={"ID":"c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2","Type":"ContainerStarted","Data":"a320971a9767afdcc861a2687c7537bc29e700eb7033e7cecc909a7544cd84ae"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.119612 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" event={"ID":"6634c4c8-389e-4b40-bc1b-c21e833569cd","Type":"ContainerStarted","Data":"ef5953baaead98e39d55b9747d9a59d6b534d34ef402c3ff14c3868be7616d31"} Nov 28 17:18:36 crc kubenswrapper[5024]: I1128 17:18:36.942631 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:36 crc kubenswrapper[5024]: E1128 17:18:36.943229 5024 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:36 crc kubenswrapper[5024]: E1128 17:18:36.943300 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert podName:7178ca93-de7b-4c2b-8235-41c6dbd4b1a1 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:40.943268122 +0000 UTC m=+1222.992189027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert") pod "infra-operator-controller-manager-57548d458d-nxs7s" (UID: "7178ca93-de7b-4c2b-8235-41c6dbd4b1a1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:37 crc kubenswrapper[5024]: E1128 17:18:37.165998 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" podUID="09ca01b9-ef1e-443d-90af-101d476cbcb5" Nov 28 17:18:37 crc kubenswrapper[5024]: I1128 17:18:37.461716 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:37 crc kubenswrapper[5024]: E1128 17:18:37.461940 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:37 crc kubenswrapper[5024]: E1128 17:18:37.462039 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert podName:ec29f6e1-030b-4bce-a179-102ef4038e17 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:41.461998325 +0000 UTC m=+1223.510919230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" (UID: "ec29f6e1-030b-4bce-a179-102ef4038e17") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:37 crc kubenswrapper[5024]: I1128 17:18:37.771051 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:37 crc kubenswrapper[5024]: I1128 17:18:37.771246 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:37 crc kubenswrapper[5024]: E1128 17:18:37.771268 5024 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:18:37 crc kubenswrapper[5024]: E1128 17:18:37.771349 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:41.771326045 +0000 UTC m=+1223.820246940 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "metrics-server-cert" not found Nov 28 17:18:37 crc kubenswrapper[5024]: E1128 17:18:37.771492 5024 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:18:37 crc kubenswrapper[5024]: E1128 17:18:37.771572 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:41.771555341 +0000 UTC m=+1223.820476246 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "webhook-server-cert" not found Nov 28 17:18:40 crc kubenswrapper[5024]: I1128 17:18:40.959465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:40 crc kubenswrapper[5024]: E1128 17:18:40.959677 5024 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:40 crc kubenswrapper[5024]: E1128 17:18:40.960148 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert podName:7178ca93-de7b-4c2b-8235-41c6dbd4b1a1 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:48.960119406 +0000 UTC m=+1231.009040491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert") pod "infra-operator-controller-manager-57548d458d-nxs7s" (UID: "7178ca93-de7b-4c2b-8235-41c6dbd4b1a1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:18:41 crc kubenswrapper[5024]: I1128 17:18:41.468623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:41 crc kubenswrapper[5024]: E1128 17:18:41.468780 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:41 crc kubenswrapper[5024]: E1128 17:18:41.469109 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert podName:ec29f6e1-030b-4bce-a179-102ef4038e17 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:49.469085546 +0000 UTC m=+1231.518006451 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" (UID: "ec29f6e1-030b-4bce-a179-102ef4038e17") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:41 crc kubenswrapper[5024]: I1128 17:18:41.776443 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:41 crc kubenswrapper[5024]: I1128 17:18:41.776612 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:41 crc kubenswrapper[5024]: E1128 17:18:41.776651 5024 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:18:41 crc kubenswrapper[5024]: E1128 17:18:41.776734 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:49.776714571 +0000 UTC m=+1231.825635476 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "metrics-server-cert" not found Nov 28 17:18:41 crc kubenswrapper[5024]: E1128 17:18:41.776762 5024 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:18:41 crc kubenswrapper[5024]: E1128 17:18:41.776814 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:18:49.776795963 +0000 UTC m=+1231.825716868 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "webhook-server-cert" not found Nov 28 17:18:42 crc kubenswrapper[5024]: I1128 17:18:42.118255 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-fbhkx" podUID="63ee2602-779a-4f8d-89e8-e741417fcba9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:18:49 crc kubenswrapper[5024]: I1128 17:18:49.028289 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:49 crc kubenswrapper[5024]: I1128 17:18:49.033630 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7178ca93-de7b-4c2b-8235-41c6dbd4b1a1-cert\") pod \"infra-operator-controller-manager-57548d458d-nxs7s\" (UID: \"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:49 crc kubenswrapper[5024]: I1128 17:18:49.151241 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:18:49 crc kubenswrapper[5024]: I1128 17:18:49.540544 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:18:49 crc kubenswrapper[5024]: E1128 17:18:49.540802 5024 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:49 crc kubenswrapper[5024]: E1128 17:18:49.541387 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert podName:ec29f6e1-030b-4bce-a179-102ef4038e17 nodeName:}" failed. No retries permitted until 2025-11-28 17:19:05.541369762 +0000 UTC m=+1247.590290667 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" (UID: "ec29f6e1-030b-4bce-a179-102ef4038e17") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:18:49 crc kubenswrapper[5024]: I1128 17:18:49.847599 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:49 crc kubenswrapper[5024]: I1128 17:18:49.847731 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:18:49 crc kubenswrapper[5024]: E1128 17:18:49.847808 5024 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:18:49 crc kubenswrapper[5024]: E1128 17:18:49.847882 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:19:05.847864386 +0000 UTC m=+1247.896785291 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "webhook-server-cert" not found Nov 28 17:18:49 crc kubenswrapper[5024]: E1128 17:18:49.848079 5024 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:18:49 crc kubenswrapper[5024]: E1128 17:18:49.848178 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs podName:e3a51773-e3f0-4e2f-b53c-8eede799ef4b nodeName:}" failed. No retries permitted until 2025-11-28 17:19:05.848157154 +0000 UTC m=+1247.897078069 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-zgrkk" (UID: "e3a51773-e3f0-4e2f-b53c-8eede799ef4b") : secret "metrics-server-cert" not found Nov 28 17:18:51 crc kubenswrapper[5024]: E1128 17:18:51.192826 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Nov 28 17:18:51 crc kubenswrapper[5024]: E1128 17:18:51.193442 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wm6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-nwtnw_openstack-operators(14970290-c7f7-4b41-9238-1c4127416b42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:18:53 crc kubenswrapper[5024]: E1128 17:18:53.426974 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Nov 28 17:18:53 crc 
kubenswrapper[5024]: E1128 17:18:53.427607 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vm88z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-vk754_openstack-operators(7b427f08-8eba-4f54-ad75-6cf94b532537): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:18:55 crc kubenswrapper[5024]: E1128 17:18:55.334814 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172" Nov 28 17:18:55 crc kubenswrapper[5024]: E1128 17:18:55.335330 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4h7kq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-5vxc8_openstack-operators(c242c002-7db6-4753-9e37-8b61faa233e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:18:57 crc kubenswrapper[5024]: E1128 17:18:57.490785 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Nov 28 17:18:57 crc kubenswrapper[5024]: E1128 17:18:57.492991 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7d69n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-htnxm_openstack-operators(dd8097de-552e-414a-98d1-314930b2d45b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:00 crc kubenswrapper[5024]: E1128 17:19:00.052666 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Nov 28 17:19:00 crc kubenswrapper[5024]: E1128 17:19:00.053240 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lddrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-b7b9m_openstack-operators(306b6495-72ef-41db-8bb8-7e3c7f4105f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:00 crc kubenswrapper[5024]: E1128 17:19:00.557857 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Nov 28 17:19:00 crc kubenswrapper[5024]: E1128 17:19:00.558119 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f222j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-xb9dw_openstack-operators(f3789406-9551-4b4e-9145-86152566a0f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:01 crc kubenswrapper[5024]: E1128 17:19:01.221051 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Nov 28 17:19:01 crc kubenswrapper[5024]: E1128 17:19:01.221621 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zcrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-mvhfv_openstack-operators(8f617e42-6f3a-45cd-86c7-58b571a13c00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:02 crc kubenswrapper[5024]: E1128 17:19:02.080224 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Nov 28 17:19:02 crc kubenswrapper[5024]: E1128 17:19:02.080426 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gznh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-6wjhl_openstack-operators(0c2c7e62-d724-45fa-8058-085b951992fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:02 crc kubenswrapper[5024]: E1128 17:19:02.802822 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Nov 28 17:19:02 crc kubenswrapper[5024]: E1128 17:19:02.803727 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dq2m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-98vj7_openstack-operators(6634c4c8-389e-4b40-bc1b-c21e833569cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:05 crc kubenswrapper[5024]: I1128 17:19:05.584329 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:19:05 crc kubenswrapper[5024]: I1128 17:19:05.590792 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec29f6e1-030b-4bce-a179-102ef4038e17-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr\" (UID: \"ec29f6e1-030b-4bce-a179-102ef4038e17\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:19:05 crc kubenswrapper[5024]: I1128 17:19:05.782206 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:19:05 crc kubenswrapper[5024]: E1128 17:19:05.785487 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Nov 28 17:19:05 crc kubenswrapper[5024]: E1128 17:19:05.785705 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ndk4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-t8wwx_openstack-operators(3052f534-e5d3-4ac8-8865-8a6de75dc6a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:05 crc kubenswrapper[5024]: I1128 17:19:05.890738 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:19:05 crc kubenswrapper[5024]: I1128 17:19:05.890873 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:19:05 crc kubenswrapper[5024]: I1128 17:19:05.895348 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:19:05 crc kubenswrapper[5024]: I1128 17:19:05.895357 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3a51773-e3f0-4e2f-b53c-8eede799ef4b-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-zgrkk\" (UID: \"e3a51773-e3f0-4e2f-b53c-8eede799ef4b\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:19:06 crc kubenswrapper[5024]: I1128 17:19:06.068223 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:19:06 crc kubenswrapper[5024]: E1128 17:19:06.380654 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Nov 28 17:19:06 crc kubenswrapper[5024]: E1128 17:19:06.380941 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnnnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-gdvrn_openstack-operators(fd737aa9-6973-41a6-8b79-03d85540253c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:06 crc kubenswrapper[5024]: E1128 17:19:06.894599 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Nov 28 17:19:06 crc kubenswrapper[5024]: E1128 17:19:06.894851 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qpml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-hrbx6_openstack-operators(f9991185-b617-4567-b70f-4adf629d5aab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:08 crc kubenswrapper[5024]: E1128 17:19:08.701953 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 28 17:19:08 crc kubenswrapper[5024]: E1128 17:19:08.702618 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w77mn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-phvrw_openstack-operators(c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Nov 28 17:19:08 crc kubenswrapper[5024]: E1128 17:19:08.703767 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" podUID="c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2" Nov 28 17:19:09 crc kubenswrapper[5024]: E1128 17:19:09.458135 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" podUID="c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2" Nov 28 17:19:11 crc kubenswrapper[5024]: E1128 17:19:11.598531 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Nov 28 17:19:11 crc kubenswrapper[5024]: E1128 17:19:11.598704 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-skq8p_openstack-operators(09ca01b9-ef1e-443d-90af-101d476cbcb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:12 crc kubenswrapper[5024]: E1128 17:19:12.150279 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3" Nov 28 17:19:12 crc kubenswrapper[5024]: E1128 17:19:12.151059 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sh64d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-k8qw6_openstack-operators(c19bfd5c-ac24-41e8-95d0-1c0b6661032d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:12 crc kubenswrapper[5024]: E1128 17:19:12.659521 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Nov 28 17:19:12 crc kubenswrapper[5024]: E1128 17:19:12.659737 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wlvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-tqqp8_openstack-operators(cdc496b3-475b-4a1a-8426-c5f470030d20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:12 crc kubenswrapper[5024]: E1128 17:19:12.724466 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52" Nov 28 17:19:12 crc kubenswrapper[5024]: E1128 17:19:12.724568 5024 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52" Nov 28 17:19:12 crc kubenswrapper[5024]: E1128 17:19:12.724776 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tkxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6b5d64d475-v8bhk_openstack-operators(7bfcb463-0064-4758-bbe8-70b0afd2b3bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:13 crc kubenswrapper[5024]: I1128 17:19:13.357956 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s"] Nov 28 17:19:13 crc kubenswrapper[5024]: I1128 17:19:13.561405 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk"] Nov 28 17:19:13 crc kubenswrapper[5024]: I1128 17:19:13.574619 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr"] Nov 28 17:19:13 crc kubenswrapper[5024]: W1128 17:19:13.710545 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec29f6e1_030b_4bce_a179_102ef4038e17.slice/crio-067db4928a834ba63da783fa3f7ba368b22325ce1c094c3ae7aa00e6d5d6353b WatchSource:0}: Error finding container 067db4928a834ba63da783fa3f7ba368b22325ce1c094c3ae7aa00e6d5d6353b: Status 404 returned error can't find the container with id 067db4928a834ba63da783fa3f7ba368b22325ce1c094c3ae7aa00e6d5d6353b Nov 28 17:19:13 crc kubenswrapper[5024]: W1128 17:19:13.717598 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7178ca93_de7b_4c2b_8235_41c6dbd4b1a1.slice/crio-f94fbddd5c52522eb72bc449f4ca7cc761c7d71e156c55ce8387707ee8347b89 WatchSource:0}: Error finding container f94fbddd5c52522eb72bc449f4ca7cc761c7d71e156c55ce8387707ee8347b89: Status 404 returned error can't find the container with id f94fbddd5c52522eb72bc449f4ca7cc761c7d71e156c55ce8387707ee8347b89 Nov 28 17:19:13 crc kubenswrapper[5024]: W1128 17:19:13.719559 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3a51773_e3f0_4e2f_b53c_8eede799ef4b.slice/crio-4b55a3d60e7c85c04fc5ef3a96ed497984e2c68abd168be8d3d019dc639f603a WatchSource:0}: Error finding container 4b55a3d60e7c85c04fc5ef3a96ed497984e2c68abd168be8d3d019dc639f603a: Status 404 returned error can't find the container with id 4b55a3d60e7c85c04fc5ef3a96ed497984e2c68abd168be8d3d019dc639f603a Nov 28 17:19:14 crc kubenswrapper[5024]: I1128 17:19:14.534762 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" 
event={"ID":"e3a51773-e3f0-4e2f-b53c-8eede799ef4b","Type":"ContainerStarted","Data":"4b55a3d60e7c85c04fc5ef3a96ed497984e2c68abd168be8d3d019dc639f603a"} Nov 28 17:19:14 crc kubenswrapper[5024]: I1128 17:19:14.538010 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" event={"ID":"c98df7f0-4e94-48f8-9ef1-2148b7909e24","Type":"ContainerStarted","Data":"0f8e57888d3abb7b6eca1abc05db878f7a8dba0d9fb23424b9784ba90db15b34"} Nov 28 17:19:14 crc kubenswrapper[5024]: I1128 17:19:14.541266 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" event={"ID":"3d3cfd45-e574-415e-87a6-2fab660d955a","Type":"ContainerStarted","Data":"487203c216d20737b34996643cf55a08d92522d9bd3df5480c158b22f7f121ac"} Nov 28 17:19:14 crc kubenswrapper[5024]: I1128 17:19:14.545258 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" event={"ID":"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1","Type":"ContainerStarted","Data":"f94fbddd5c52522eb72bc449f4ca7cc761c7d71e156c55ce8387707ee8347b89"} Nov 28 17:19:14 crc kubenswrapper[5024]: I1128 17:19:14.547641 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" event={"ID":"433b0a08-3f38-4113-bab1-49eb5f2e0009","Type":"ContainerStarted","Data":"a9c105c1922949980ade82260ecc4834c5aa95efae9345db26ae5a8549713dc8"} Nov 28 17:19:14 crc kubenswrapper[5024]: I1128 17:19:14.549426 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" event={"ID":"ec29f6e1-030b-4bce-a179-102ef4038e17","Type":"ContainerStarted","Data":"067db4928a834ba63da783fa3f7ba368b22325ce1c094c3ae7aa00e6d5d6353b"} Nov 28 17:19:17 crc kubenswrapper[5024]: I1128 17:19:17.580464 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" event={"ID":"e3a51773-e3f0-4e2f-b53c-8eede799ef4b","Type":"ContainerStarted","Data":"4e0e0303d5b10d9d2a0be4cf47691138bb5b269ebcfc8a57d60b9edd3a241da6"} Nov 28 17:19:17 crc kubenswrapper[5024]: I1128 17:19:17.581197 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:19:17 crc kubenswrapper[5024]: I1128 17:19:17.614281 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" podStartSLOduration=44.614262246 podStartE2EDuration="44.614262246s" podCreationTimestamp="2025-11-28 17:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:17.605928062 +0000 UTC m=+1259.654848967" watchObservedRunningTime="2025-11-28 17:19:17.614262246 +0000 UTC m=+1259.663183151" Nov 28 17:19:17 crc kubenswrapper[5024]: E1128 17:19:17.698757 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" podUID="7bfcb463-0064-4758-bbe8-70b0afd2b3bd" Nov 28 17:19:17 crc kubenswrapper[5024]: E1128 17:19:17.835034 5024 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" podUID="c242c002-7db6-4753-9e37-8b61faa233e7" Nov 28 17:19:17 crc kubenswrapper[5024]: E1128 17:19:17.881462 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" podUID="14970290-c7f7-4b41-9238-1c4127416b42" Nov 28 17:19:17 crc kubenswrapper[5024]: E1128 17:19:17.906386 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" podUID="7b427f08-8eba-4f54-ad75-6cf94b532537" Nov 28 17:19:17 crc kubenswrapper[5024]: E1128 17:19:17.950928 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" podUID="dd8097de-552e-414a-98d1-314930b2d45b" Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.035585 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" podUID="306b6495-72ef-41db-8bb8-7e3c7f4105f1" Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.251284 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" podUID="cdc496b3-475b-4a1a-8426-c5f470030d20" Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.262795 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" podUID="f3789406-9551-4b4e-9145-86152566a0f8" Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.408493 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" podUID="0c2c7e62-d724-45fa-8058-085b951992fc" Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.430371 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" podUID="fd737aa9-6973-41a6-8b79-03d85540253c" Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.476942 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" podUID="8f617e42-6f3a-45cd-86c7-58b571a13c00" Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.655440 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" podUID="6634c4c8-389e-4b40-bc1b-c21e833569cd" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.671871 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" event={"ID":"c242c002-7db6-4753-9e37-8b61faa233e7","Type":"ContainerStarted","Data":"47e9923c2de2d74c744a9afa4645686a3d099b9b1cdfe5b71c4b2d545749a2ee"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.679919 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" event={"ID":"8f617e42-6f3a-45cd-86c7-58b571a13c00","Type":"ContainerStarted","Data":"0c70abd9c8c8d5122673df5fd5311ca49071bdfb473e8d2c0d8b3d6c215997d5"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.732400 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" event={"ID":"f3789406-9551-4b4e-9145-86152566a0f8","Type":"ContainerStarted","Data":"6e66e5d9893eddf5c2f74fc723ec1f1b36f6ac805c2a3991559eac2236a3a39d"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.748143 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" event={"ID":"7b427f08-8eba-4f54-ad75-6cf94b532537","Type":"ContainerStarted","Data":"bf202dc9275d02670c3822cc8f2b2b259fe96a2429d29ad7f3e6337003bc13e2"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.767209 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.776007 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" event={"ID":"3d3cfd45-e574-415e-87a6-2fab660d955a","Type":"ContainerStarted","Data":"262044dab6969f35a1778ba57731f48b2ffd1d16b60b01b0e1c6292ec397dc7a"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.779255 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.812669 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" event={"ID":"0c2c7e62-d724-45fa-8058-085b951992fc","Type":"ContainerStarted","Data":"ed18aef70a7cae7ee0c4bed212464c2e19f6797b70e1859f5c6b146cbead7f48"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.830138 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" event={"ID":"433b0a08-3f38-4113-bab1-49eb5f2e0009","Type":"ContainerStarted","Data":"14164d88f4e386e739c08578b66e8c7b5c969f212ab396665a9f542f279f4fbe"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.831003 5024 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.839248 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" podStartSLOduration=3.790354859 podStartE2EDuration="45.839224315s" podCreationTimestamp="2025-11-28 17:18:33 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.838409234 +0000 UTC m=+1217.887330139" lastFinishedPulling="2025-11-28 17:19:17.88727869 +0000 UTC m=+1259.936199595" observedRunningTime="2025-11-28 17:19:18.82120703 +0000 UTC m=+1260.870127935" watchObservedRunningTime="2025-11-28 17:19:18.839224315 +0000 UTC m=+1260.888145220" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.842274 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" event={"ID":"cdc496b3-475b-4a1a-8426-c5f470030d20","Type":"ContainerStarted","Data":"3344e1c9b7708fdf0fc861d74b0822f6b20e714b9934e9fde8de891f63824946"} Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.847205 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" podUID="cdc496b3-475b-4a1a-8426-c5f470030d20" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.858207 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" event={"ID":"fd737aa9-6973-41a6-8b79-03d85540253c","Type":"ContainerStarted","Data":"6c0102ce1c5c23bd058c905144267ac33e42af36cdbfa75ecb0aeeeada31eaca"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.864209 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" podStartSLOduration=4.49071852 podStartE2EDuration="46.864184006s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.819985119 +0000 UTC m=+1217.868906024" lastFinishedPulling="2025-11-28 17:19:18.193450605 +0000 UTC m=+1260.242371510" observedRunningTime="2025-11-28 17:19:18.853297824 +0000 UTC m=+1260.902218729" watchObservedRunningTime="2025-11-28 17:19:18.864184006 +0000 UTC m=+1260.913104921" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.867453 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" event={"ID":"dd8097de-552e-414a-98d1-314930b2d45b","Type":"ContainerStarted","Data":"6e6c15d34a0b9c3a25dfe616a33099f8fce85f2d45ec216a9e8c8ac6bc0a94d2"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.871055 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" event={"ID":"306b6495-72ef-41db-8bb8-7e3c7f4105f1","Type":"ContainerStarted","Data":"30c95a456aab065ac73d5047020087cf42f6b35765f79b74c796b26596f72b26"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.879501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" 
event={"ID":"14970290-c7f7-4b41-9238-1c4127416b42","Type":"ContainerStarted","Data":"d0b11bb658165d44a9fd675c1703aa48fbb97bbcc48b363bac80e84270c8e3bd"} Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.891009 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" podStartSLOduration=3.162846173 podStartE2EDuration="46.890992397s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.140231737 +0000 UTC m=+1216.189152632" lastFinishedPulling="2025-11-28 17:19:17.868377951 +0000 UTC m=+1259.917298856" observedRunningTime="2025-11-28 17:19:18.877607547 +0000 UTC m=+1260.926528452" watchObservedRunningTime="2025-11-28 17:19:18.890992397 +0000 UTC m=+1260.939913292" Nov 28 17:19:18 crc kubenswrapper[5024]: I1128 17:19:18.898137 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" event={"ID":"7bfcb463-0064-4758-bbe8-70b0afd2b3bd","Type":"ContainerStarted","Data":"f9cf1bf3db7bdbc5f765d57b2b6bc20b833b1bd13cba5c10d63a2452eba513c4"} Nov 28 17:19:18 crc kubenswrapper[5024]: E1128 17:19:18.906220 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" podUID="7bfcb463-0064-4758-bbe8-70b0afd2b3bd" Nov 28 17:19:19 crc kubenswrapper[5024]: I1128 17:19:19.948487 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" event={"ID":"c98df7f0-4e94-48f8-9ef1-2148b7909e24","Type":"ContainerStarted","Data":"d38dbcb62c687a9e952c312f216fdff2a468ec03990fd446034438d7dd351e2c"} Nov 28 17:19:19 crc kubenswrapper[5024]: I1128 17:19:19.951712 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-27b8t" Nov 28 17:19:19 crc kubenswrapper[5024]: I1128 17:19:19.952841 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" event={"ID":"6634c4c8-389e-4b40-bc1b-c21e833569cd","Type":"ContainerStarted","Data":"d6820ebdadf05d82a004ec366f2eaf3a24a1addecd4220386390582cd0c3606d"} Nov 28 17:19:19 crc kubenswrapper[5024]: I1128 17:19:19.962393 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" event={"ID":"c242c002-7db6-4753-9e37-8b61faa233e7","Type":"ContainerStarted","Data":"a7656531ed2bdf5d28bf6112cc30856e3367724475007d4030c953f00d895d20"} Nov 28 17:19:19 crc kubenswrapper[5024]: I1128 17:19:19.962441 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" Nov 28 17:19:19 crc kubenswrapper[5024]: E1128 17:19:19.969332 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" podUID="cdc496b3-475b-4a1a-8426-c5f470030d20" 
Nov 28 17:19:19 crc kubenswrapper[5024]: E1128 17:19:19.969411 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" podUID="7bfcb463-0064-4758-bbe8-70b0afd2b3bd" Nov 28 17:19:19 crc kubenswrapper[5024]: I1128 17:19:19.970140 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-9zx4m" Nov 28 17:19:19 crc kubenswrapper[5024]: I1128 17:19:19.970501 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-v2mb6" Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.084913 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" podStartSLOduration=3.016125727 podStartE2EDuration="48.0848935s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.390619962 +0000 UTC m=+1216.439540867" lastFinishedPulling="2025-11-28 17:19:19.459387735 +0000 UTC m=+1261.508308640" observedRunningTime="2025-11-28 17:19:20.07524071 +0000 UTC m=+1262.124161615" watchObservedRunningTime="2025-11-28 17:19:20.0848935 +0000 UTC m=+1262.133814405" Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.970599 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" event={"ID":"8f617e42-6f3a-45cd-86c7-58b571a13c00","Type":"ContainerStarted","Data":"ec1042193bf73603e830d4b0772143f7802052f2eecb8a4cd7b1529a54f9daf4"} Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.971038 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.974317 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" event={"ID":"f3789406-9551-4b4e-9145-86152566a0f8","Type":"ContainerStarted","Data":"576defeb0c687629ef935b3f20c7444b4bfcfb08b85a27c3486b10f91195a5ef"} Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.974741 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.976611 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" event={"ID":"0c2c7e62-d724-45fa-8058-085b951992fc","Type":"ContainerStarted","Data":"d6356b9941194b5edf45710d306fe2be9e382b2c7b852e27033d63376f4caace"} Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.976798 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.980230 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" 
event={"ID":"7b427f08-8eba-4f54-ad75-6cf94b532537","Type":"ContainerStarted","Data":"2372d55ce9346d2c2100d6f14bc9d7e3dc206021d54a6386ddf756a762282c57"} Nov 28 17:19:20 crc kubenswrapper[5024]: I1128 17:19:20.980265 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" Nov 28 17:19:21 crc kubenswrapper[5024]: I1128 17:19:21.001002 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" podStartSLOduration=3.682148672 podStartE2EDuration="49.000982661s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.14032892 +0000 UTC m=+1216.189249825" lastFinishedPulling="2025-11-28 17:19:19.459162909 +0000 UTC m=+1261.508083814" observedRunningTime="2025-11-28 17:19:20.997109857 +0000 UTC m=+1263.046030762" watchObservedRunningTime="2025-11-28 17:19:21.000982661 +0000 UTC m=+1263.049903566" Nov 28 17:19:21 crc kubenswrapper[5024]: I1128 17:19:21.037308 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" podStartSLOduration=4.219538406 podStartE2EDuration="49.037277797s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.807131105 +0000 UTC m=+1216.856052000" lastFinishedPulling="2025-11-28 17:19:19.624870476 +0000 UTC m=+1261.673791391" observedRunningTime="2025-11-28 17:19:21.027111174 +0000 UTC m=+1263.076032079" watchObservedRunningTime="2025-11-28 17:19:21.037277797 +0000 UTC m=+1263.086198712" Nov 28 17:19:21 crc kubenswrapper[5024]: I1128 17:19:21.061144 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" podStartSLOduration=4.342564355 podStartE2EDuration="49.061116218s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.833594657 +0000 UTC m=+1216.882515562" lastFinishedPulling="2025-11-28 17:19:19.55214652 +0000 UTC m=+1261.601067425" observedRunningTime="2025-11-28 17:19:21.045047076 +0000 UTC m=+1263.093967981" watchObservedRunningTime="2025-11-28 17:19:21.061116218 +0000 UTC m=+1263.110037123" Nov 28 17:19:21 crc kubenswrapper[5024]: I1128 17:19:21.076082 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" podStartSLOduration=4.274649898 podStartE2EDuration="49.0760592s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.822762996 +0000 UTC m=+1216.871683901" lastFinishedPulling="2025-11-28 17:19:19.624172298 +0000 UTC m=+1261.673093203" observedRunningTime="2025-11-28 17:19:21.061326354 +0000 UTC m=+1263.110247259" watchObservedRunningTime="2025-11-28 17:19:21.0760592 +0000 UTC m=+1263.124980115" Nov 28 17:19:25 crc kubenswrapper[5024]: I1128 17:19:25.026099 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" event={"ID":"fd737aa9-6973-41a6-8b79-03d85540253c","Type":"ContainerStarted","Data":"42c9131d5194538102c58283905cb4016a304d03fe212a05658fef1012b66da6"} Nov 28 17:19:26 crc kubenswrapper[5024]: I1128 17:19:26.077607 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-668879d68f-zgrkk" Nov 28 17:19:27 crc kubenswrapper[5024]: I1128 17:19:27.061353 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" event={"ID":"dd8097de-552e-414a-98d1-314930b2d45b","Type":"ContainerStarted","Data":"a8ffd78734db7974512e70dfb5b7abcad9ab2bdf2f0eb5c155cb80579a40cd8d"} Nov 28 17:19:27 crc kubenswrapper[5024]: I1128 17:19:27.083906 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" event={"ID":"306b6495-72ef-41db-8bb8-7e3c7f4105f1","Type":"ContainerStarted","Data":"db01c5bc929955b7ca90847c5e0081c1fdbd52b8fa1a448369ea52c85d893bca"} Nov 28 17:19:27 crc kubenswrapper[5024]: I1128 17:19:27.084954 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" Nov 28 17:19:27 crc kubenswrapper[5024]: I1128 17:19:27.098535 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" event={"ID":"14970290-c7f7-4b41-9238-1c4127416b42","Type":"ContainerStarted","Data":"70601cbc753fab28fdf9b8a087743fc5678551c9c8032c95b8cdd0acdd09af30"} Nov 28 17:19:27 crc kubenswrapper[5024]: I1128 17:19:27.098694 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" Nov 28 17:19:27 crc kubenswrapper[5024]: I1128 17:19:27.123530 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" podStartSLOduration=9.335692927 podStartE2EDuration="55.123507632s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:33.852162598 +0000 UTC m=+1215.901083503" lastFinishedPulling="2025-11-28 17:19:19.639977313 +0000 UTC m=+1261.688898208" observedRunningTime="2025-11-28 17:19:27.119433812 +0000 UTC m=+1269.168354717" watchObservedRunningTime="2025-11-28 17:19:27.123507632 +0000 UTC m=+1269.172428537" Nov 28 17:19:27 crc kubenswrapper[5024]: I1128 17:19:27.152135 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" podStartSLOduration=11.36119661 podStartE2EDuration="55.152117452s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.803473295 +0000 UTC m=+1217.852394200" lastFinishedPulling="2025-11-28 17:19:19.594394147 +0000 UTC m=+1261.643315042" observedRunningTime="2025-11-28 17:19:27.150063286 +0000 UTC m=+1269.198984191" watchObservedRunningTime="2025-11-28 17:19:27.152117452 +0000 UTC m=+1269.201038357" Nov 28 17:19:28 crc kubenswrapper[5024]: I1128 17:19:28.108474 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" Nov 28 17:19:28 crc kubenswrapper[5024]: I1128 17:19:28.129289 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" podStartSLOduration=11.312001936 podStartE2EDuration="56.129261204s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.805890762 +0000 UTC m=+1216.854811667" lastFinishedPulling="2025-11-28 17:19:19.62315003 +0000 UTC 
m=+1261.672070935" observedRunningTime="2025-11-28 17:19:28.128153014 +0000 UTC m=+1270.177073919" watchObservedRunningTime="2025-11-28 17:19:28.129261204 +0000 UTC m=+1270.178182109" Nov 28 17:19:29 crc kubenswrapper[5024]: I1128 17:19:29.133971 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" podStartSLOduration=12.473550068 podStartE2EDuration="57.133954938s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.237731217 +0000 UTC m=+1217.286652122" lastFinishedPulling="2025-11-28 17:19:19.898136087 +0000 UTC m=+1261.947056992" observedRunningTime="2025-11-28 17:19:29.127640498 +0000 UTC m=+1271.176561403" watchObservedRunningTime="2025-11-28 17:19:29.133954938 +0000 UTC m=+1271.182875843" Nov 28 17:19:29 crc kubenswrapper[5024]: E1128 17:19:29.630322 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 17:19:29 crc kubenswrapper[5024]: E1128 17:19:29.630550 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sh64d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-k8qw6_openstack-operators(c19bfd5c-ac24-41e8-95d0-1c0b6661032d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:29 crc kubenswrapper[5024]: E1128 17:19:29.631717 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" podUID="c19bfd5c-ac24-41e8-95d0-1c0b6661032d" Nov 28 17:19:30 crc kubenswrapper[5024]: I1128 17:19:30.141543 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" event={"ID":"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1","Type":"ContainerStarted","Data":"97192cfc2f4a83fe92d447915bb0dd6ac4e8c11dc621b5088eba9e8ce50247e0"} Nov 28 17:19:30 crc kubenswrapper[5024]: E1128 17:19:30.143431 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 17:19:30 crc kubenswrapper[5024]: E1128 17:19:30.143635 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svjcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-skq8p_openstack-operators(09ca01b9-ef1e-443d-90af-101d476cbcb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:30 crc kubenswrapper[5024]: E1128 17:19:30.144718 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" podUID="09ca01b9-ef1e-443d-90af-101d476cbcb5" Nov 28 17:19:30 crc kubenswrapper[5024]: E1128 17:19:30.208960 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 17:19:30 crc kubenswrapper[5024]: E1128 17:19:30.209161 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ndk4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-t8wwx_openstack-operators(3052f534-e5d3-4ac8-8865-8a6de75dc6a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:30 crc kubenswrapper[5024]: E1128 17:19:30.210556 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" podUID="3052f534-e5d3-4ac8-8865-8a6de75dc6a2" Nov 28 17:19:30 crc kubenswrapper[5024]: E1128 17:19:30.401105 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" podUID="f9991185-b617-4567-b70f-4adf629d5aab" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.151483 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" event={"ID":"ec29f6e1-030b-4bce-a179-102ef4038e17","Type":"ContainerStarted","Data":"79687feac5c4ce5fe860a044b80d5d0440f2f55a07c71eb61e55b99f2117962f"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.151537 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" event={"ID":"ec29f6e1-030b-4bce-a179-102ef4038e17","Type":"ContainerStarted","Data":"9abb158d97e91a37c03e3871ca3846bf269afd6f9b0fea610dd2b274d30a705f"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.152757 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.157354 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" event={"ID":"c19bfd5c-ac24-41e8-95d0-1c0b6661032d","Type":"ContainerStarted","Data":"f41622d89d5dffb9da01bdf5fe500db6b9eb94ee56109f80fdc4681205bfeaa9"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.159415 5024 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" event={"ID":"7bfcb463-0064-4758-bbe8-70b0afd2b3bd","Type":"ContainerStarted","Data":"1600a63d82d1ec7fdd28beb5180a8f588af832f39edb238c407138a3f8dafbff"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.159589 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.160967 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" event={"ID":"c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2","Type":"ContainerStarted","Data":"56fc49eebe30ae64ab12e369ac13bbdaa9f573f3271312fd615ad3f87cdd9cae"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.162226 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" event={"ID":"6634c4c8-389e-4b40-bc1b-c21e833569cd","Type":"ContainerStarted","Data":"400836e10f8a4927808a4ac2dd3816eab568a9d440ab1c8181776b61d05d5adf"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.163650 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.164904 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" event={"ID":"f9991185-b617-4567-b70f-4adf629d5aab","Type":"ContainerStarted","Data":"6728ef6727f84baf07a8424e2d7999c57ba8bb50b1e0d71a65666a91232087d1"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.172209 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" event={"ID":"7178ca93-de7b-4c2b-8235-41c6dbd4b1a1","Type":"ContainerStarted","Data":"e6ad3466597fe6c0456f5fdf30c7314df90a6f42a75f6768c5907b0cf6888623"} Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.172609 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.201748 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" podStartSLOduration=43.222292048 podStartE2EDuration="59.201727126s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:19:13.714362088 +0000 UTC m=+1255.763282993" lastFinishedPulling="2025-11-28 17:19:29.693797166 +0000 UTC m=+1271.742718071" observedRunningTime="2025-11-28 17:19:31.180675839 +0000 UTC m=+1273.229596744" watchObservedRunningTime="2025-11-28 17:19:31.201727126 +0000 UTC m=+1273.250648031" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.221270 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" podStartSLOduration=3.192703504 podStartE2EDuration="58.221252291s" podCreationTimestamp="2025-11-28 17:18:33 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.800410472 +0000 UTC m=+1217.849331387" lastFinishedPulling="2025-11-28 17:19:30.828959269 +0000 UTC m=+1272.877880174" observedRunningTime="2025-11-28 17:19:31.217318705 +0000 UTC m=+1273.266239610" 
watchObservedRunningTime="2025-11-28 17:19:31.221252291 +0000 UTC m=+1273.270173196" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.243043 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" podStartSLOduration=43.281474878 podStartE2EDuration="59.243009046s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:19:13.724835299 +0000 UTC m=+1255.773756204" lastFinishedPulling="2025-11-28 17:19:29.686369467 +0000 UTC m=+1271.735290372" observedRunningTime="2025-11-28 17:19:31.240311413 +0000 UTC m=+1273.289232318" watchObservedRunningTime="2025-11-28 17:19:31.243009046 +0000 UTC m=+1273.291929951" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.263675 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phvrw" podStartSLOduration=4.098325053 podStartE2EDuration="58.263658921s" podCreationTimestamp="2025-11-28 17:18:33 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.876747616 +0000 UTC m=+1217.925668521" lastFinishedPulling="2025-11-28 17:19:30.042081484 +0000 UTC m=+1272.091002389" observedRunningTime="2025-11-28 17:19:31.254817064 +0000 UTC m=+1273.303737979" watchObservedRunningTime="2025-11-28 17:19:31.263658921 +0000 UTC m=+1273.312579826" Nov 28 17:19:31 crc kubenswrapper[5024]: I1128 17:19:31.324074 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" podStartSLOduration=4.609194836 podStartE2EDuration="59.324057076s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.32780883 +0000 UTC m=+1217.376729735" lastFinishedPulling="2025-11-28 17:19:30.04267107 +0000 UTC m=+1272.091591975" observedRunningTime="2025-11-28 17:19:31.320178822 +0000 UTC m=+1273.369099727" watchObservedRunningTime="2025-11-28 17:19:31.324057076 +0000 UTC m=+1273.372977981" Nov 28 17:19:32 crc kubenswrapper[5024]: I1128 17:19:32.182219 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" event={"ID":"3052f534-e5d3-4ac8-8865-8a6de75dc6a2","Type":"ContainerStarted","Data":"5381580223a9719164c3ac275a34cbf3224ae2d2f816b81618071bbb2f203f1a"} Nov 28 17:19:32 crc kubenswrapper[5024]: I1128 17:19:32.182279 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" event={"ID":"3052f534-e5d3-4ac8-8865-8a6de75dc6a2","Type":"ContainerStarted","Data":"7384353ac71b1ac526faafd935b0b080fece9dbac09c48785454511522632095"} Nov 28 17:19:32 crc kubenswrapper[5024]: I1128 17:19:32.183587 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" Nov 28 17:19:32 crc kubenswrapper[5024]: I1128 17:19:32.191312 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" event={"ID":"f9991185-b617-4567-b70f-4adf629d5aab","Type":"ContainerStarted","Data":"0cf1e7492d05bbe95b16cec8e78d048ca01a2bfc6b9b6037c6fe482cde976b63"} Nov 28 17:19:32 crc kubenswrapper[5024]: I1128 17:19:32.192138 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" Nov 28 17:19:32 crc 
kubenswrapper[5024]: I1128 17:19:32.195605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" event={"ID":"c19bfd5c-ac24-41e8-95d0-1c0b6661032d","Type":"ContainerStarted","Data":"c3f31aaf6c69d1948d073d5f84a6f9d9b1a46d28985dc706bc3d50005e8aba6d"} Nov 28 17:19:32 crc kubenswrapper[5024]: I1128 17:19:32.218211 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" podStartSLOduration=3.946986945 podStartE2EDuration="1m0.218187086s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.387338392 +0000 UTC m=+1217.436259297" lastFinishedPulling="2025-11-28 17:19:31.658538533 +0000 UTC m=+1273.707459438" observedRunningTime="2025-11-28 17:19:32.212259377 +0000 UTC m=+1274.261180292" watchObservedRunningTime="2025-11-28 17:19:32.218187086 +0000 UTC m=+1274.267107991" Nov 28 17:19:32 crc kubenswrapper[5024]: I1128 17:19:32.243492 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" podStartSLOduration=4.186836135 podStartE2EDuration="1m0.243474966s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:34.822157819 +0000 UTC m=+1216.871078724" lastFinishedPulling="2025-11-28 17:19:30.87879665 +0000 UTC m=+1272.927717555" observedRunningTime="2025-11-28 17:19:32.242550832 +0000 UTC m=+1274.291471737" watchObservedRunningTime="2025-11-28 17:19:32.243474966 +0000 UTC m=+1274.292395871" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.078349 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-b7b9m" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.104781 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" podStartSLOduration=4.999740752 podStartE2EDuration="1m1.104762503s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.792092139 +0000 UTC m=+1217.841013054" lastFinishedPulling="2025-11-28 17:19:31.89711391 +0000 UTC m=+1273.946034805" observedRunningTime="2025-11-28 17:19:32.283836442 +0000 UTC m=+1274.332757347" watchObservedRunningTime="2025-11-28 17:19:33.104762503 +0000 UTC m=+1275.153683408" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.144760 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-mvhfv" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.208178 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" event={"ID":"cdc496b3-475b-4a1a-8426-c5f470030d20","Type":"ContainerStarted","Data":"08af966f5bcf931425185fc2322438cc2eaaaba6d4caa7435616e836770d49b6"} Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.210420 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.327471 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-5vxc8" Nov 28 17:19:33 crc 
kubenswrapper[5024]: I1128 17:19:33.518922 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-vk754" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.531661 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-htnxm" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.599073 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6wjhl" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.713829 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-xb9dw" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.779436 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.791781 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nwtnw" Nov 28 17:19:33 crc kubenswrapper[5024]: I1128 17:19:33.944349 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-gdvrn" Nov 28 17:19:34 crc kubenswrapper[5024]: I1128 17:19:34.215346 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" Nov 28 17:19:34 crc kubenswrapper[5024]: I1128 17:19:34.240799 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" podStartSLOduration=4.708086385 podStartE2EDuration="1m2.240766507s" podCreationTimestamp="2025-11-28 17:18:32 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.431414467 +0000 UTC m=+1217.480335372" lastFinishedPulling="2025-11-28 17:19:32.964094589 +0000 UTC m=+1275.013015494" observedRunningTime="2025-11-28 17:19:34.23641087 +0000 UTC m=+1276.285331815" watchObservedRunningTime="2025-11-28 17:19:34.240766507 +0000 UTC m=+1276.289687432" Nov 28 17:19:35 crc kubenswrapper[5024]: I1128 17:19:35.801739 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr" Nov 28 17:19:39 crc kubenswrapper[5024]: I1128 17:19:39.156677 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-nxs7s" Nov 28 17:19:43 crc kubenswrapper[5024]: I1128 17:19:43.595209 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-k8qw6" Nov 28 17:19:43 crc kubenswrapper[5024]: I1128 17:19:43.875488 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-t8wwx" Nov 28 17:19:43 crc kubenswrapper[5024]: I1128 17:19:43.937404 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-98vj7" Nov 28 17:19:43 crc kubenswrapper[5024]: I1128 17:19:43.938748 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-tqqp8" Nov 28 17:19:44 crc kubenswrapper[5024]: I1128 17:19:44.080835 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-hrbx6" Nov 28 17:19:44 crc kubenswrapper[5024]: I1128 17:19:44.231984 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-v8bhk" Nov 28 17:19:46 crc kubenswrapper[5024]: I1128 17:19:46.418541 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" event={"ID":"09ca01b9-ef1e-443d-90af-101d476cbcb5","Type":"ContainerStarted","Data":"850bf6882fce46db1065ed15447937f08f7a7724dafa3192723d793c5b6f3514"} Nov 28 17:19:46 crc kubenswrapper[5024]: I1128 17:19:46.419247 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" event={"ID":"09ca01b9-ef1e-443d-90af-101d476cbcb5","Type":"ContainerStarted","Data":"24b3aab3bbf91b524abc3ef3a5744fd66783dc1765f4528bffb973b799b2dcc0"} Nov 28 17:19:46 crc kubenswrapper[5024]: I1128 17:19:46.419433 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" Nov 28 17:19:46 crc kubenswrapper[5024]: I1128 17:19:46.441898 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" podStartSLOduration=3.400113163 podStartE2EDuration="1m13.441878839s" podCreationTimestamp="2025-11-28 17:18:33 +0000 UTC" firstStartedPulling="2025-11-28 17:18:35.949209525 +0000 UTC m=+1217.998130430" lastFinishedPulling="2025-11-28 17:19:45.990975201 +0000 UTC m=+1288.039896106" observedRunningTime="2025-11-28 17:19:46.43595022 +0000 UTC m=+1288.484871125" watchObservedRunningTime="2025-11-28 17:19:46.441878839 +0000 UTC m=+1288.490799744" Nov 28 17:19:54 crc kubenswrapper[5024]: I1128 17:19:54.242622 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-skq8p" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.222746 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gx9kn"] Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.224770 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.231090 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.231332 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.231579 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.231612 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-49bws" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.240306 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gx9kn"] Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.276261 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l7gzz"] Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.278234 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.282133 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.322358 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l7gzz"] Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.348332 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgcg5\" (UniqueName: \"kubernetes.io/projected/ef7b3aae-0376-47da-a875-80861382c90c-kube-api-access-wgcg5\") pod \"dnsmasq-dns-675f4bcbfc-gx9kn\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.348515 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b3aae-0376-47da-a875-80861382c90c-config\") pod \"dnsmasq-dns-675f4bcbfc-gx9kn\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.451989 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgcg5\" (UniqueName: \"kubernetes.io/projected/ef7b3aae-0376-47da-a875-80861382c90c-kube-api-access-wgcg5\") pod \"dnsmasq-dns-675f4bcbfc-gx9kn\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.452587 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.452957 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hrb\" (UniqueName: \"kubernetes.io/projected/29e825cb-cc43-43cc-9b9d-f376e964c371-kube-api-access-x5hrb\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.453199 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b3aae-0376-47da-a875-80861382c90c-config\") pod \"dnsmasq-dns-675f4bcbfc-gx9kn\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.453374 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-config\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.454089 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b3aae-0376-47da-a875-80861382c90c-config\") pod \"dnsmasq-dns-675f4bcbfc-gx9kn\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.513411 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgcg5\" (UniqueName: \"kubernetes.io/projected/ef7b3aae-0376-47da-a875-80861382c90c-kube-api-access-wgcg5\") pod \"dnsmasq-dns-675f4bcbfc-gx9kn\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.555486 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-config\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.555859 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.556106 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5hrb\" (UniqueName: \"kubernetes.io/projected/29e825cb-cc43-43cc-9b9d-f376e964c371-kube-api-access-x5hrb\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.556856 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.557415 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-config\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.558265 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.581189 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5hrb\" (UniqueName: \"kubernetes.io/projected/29e825cb-cc43-43cc-9b9d-f376e964c371-kube-api-access-x5hrb\") pod \"dnsmasq-dns-78dd6ddcc-l7gzz\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:09 crc kubenswrapper[5024]: I1128 17:20:09.621180 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:10 crc kubenswrapper[5024]: W1128 17:20:10.092260 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef7b3aae_0376_47da_a875_80861382c90c.slice/crio-6ed3e0c173bf1a791cd48e829774ab16a54e265c5826e1227923b703290ea1b9 WatchSource:0}: Error finding container 6ed3e0c173bf1a791cd48e829774ab16a54e265c5826e1227923b703290ea1b9: Status 404 returned error can't find the container with id 6ed3e0c173bf1a791cd48e829774ab16a54e265c5826e1227923b703290ea1b9 Nov 28 17:20:10 crc kubenswrapper[5024]: I1128 17:20:10.094342 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gx9kn"] Nov 28 17:20:10 crc kubenswrapper[5024]: I1128 17:20:10.202191 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l7gzz"] Nov 28 17:20:10 crc kubenswrapper[5024]: I1128 17:20:10.655544 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" event={"ID":"ef7b3aae-0376-47da-a875-80861382c90c","Type":"ContainerStarted","Data":"6ed3e0c173bf1a791cd48e829774ab16a54e265c5826e1227923b703290ea1b9"} Nov 28 17:20:10 crc kubenswrapper[5024]: I1128 17:20:10.656566 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" event={"ID":"29e825cb-cc43-43cc-9b9d-f376e964c371","Type":"ContainerStarted","Data":"f24b14c3bcd9f64414079053e289a12b93e0affb52cdf085d9a68d65b4374a05"} Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.128217 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gx9kn"] Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.158449 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4czkq"] Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.160955 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.177703 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4czkq"] Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.316124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.316292 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-config\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.316322 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tbx\" (UniqueName: \"kubernetes.io/projected/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-kube-api-access-q5tbx\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.417919 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.417998 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-config\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.418031 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5tbx\" (UniqueName: \"kubernetes.io/projected/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-kube-api-access-q5tbx\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.419205 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-config\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.419246 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.460306 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5tbx\" (UniqueName: 
\"kubernetes.io/projected/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-kube-api-access-q5tbx\") pod \"dnsmasq-dns-666b6646f7-4czkq\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.463282 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l7gzz"] Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.494713 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bg67g"] Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.501284 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.504295 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.558899 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bg67g"] Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.622436 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.622630 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-config\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.622678 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25v79\" (UniqueName: \"kubernetes.io/projected/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-kube-api-access-25v79\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.724074 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-config\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.724339 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25v79\" (UniqueName: \"kubernetes.io/projected/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-kube-api-access-25v79\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.724392 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.725360 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.725405 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-config\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.752875 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25v79\" (UniqueName: \"kubernetes.io/projected/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-kube-api-access-25v79\") pod \"dnsmasq-dns-57d769cc4f-bg67g\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:12 crc kubenswrapper[5024]: I1128 17:20:12.845062 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.302867 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.304626 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.307704 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rl4vn" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.307848 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.307990 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.308332 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.308461 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.308621 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.309794 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.316494 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.409377 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4czkq"] Nov 28 17:20:13 crc kubenswrapper[5024]: W1128 17:20:13.419356 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9a33b1_4a6d_46f2_a251_d4f75fa7171d.slice/crio-d4e2e65e75a1be9bc126cba4b79a5d1ec0c2fb3790ef9803967ba4b20aeb16a3 WatchSource:0}: Error finding container d4e2e65e75a1be9bc126cba4b79a5d1ec0c2fb3790ef9803967ba4b20aeb16a3: Status 404 returned error can't find the container with id d4e2e65e75a1be9bc126cba4b79a5d1ec0c2fb3790ef9803967ba4b20aeb16a3 
Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472277 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472624 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472653 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472714 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472734 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472762 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a996fd8-35ac-41d9-a490-71dc31fa0686-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472832 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsx4\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-kube-api-access-zvsx4\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472849 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472866 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.472895 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a996fd8-35ac-41d9-a490-71dc31fa0686-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.574707 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a996fd8-35ac-41d9-a490-71dc31fa0686-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.574798 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.574858 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.574897 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.574992 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.575042 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.575080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a996fd8-35ac-41d9-a490-71dc31fa0686-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.575118 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.575184 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsx4\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-kube-api-access-zvsx4\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.575209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.575229 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.576691 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.577310 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.577310 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.577978 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.578011 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.578915 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.582967 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.583556 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a996fd8-35ac-41d9-a490-71dc31fa0686-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.587516 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a996fd8-35ac-41d9-a490-71dc31fa0686-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.598396 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsx4\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-kube-api-access-zvsx4\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.617258 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.623496 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.627038 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.629552 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.637951 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.639508 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.643109 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.643858 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.644252 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.646575 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tvj4k" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.646642 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.646583 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.684084 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.721256 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bg67g"] Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.750036 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" event={"ID":"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d","Type":"ContainerStarted","Data":"d4e2e65e75a1be9bc126cba4b79a5d1ec0c2fb3790ef9803967ba4b20aeb16a3"} Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.779938 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.780002 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782251 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782305 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782335 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782365 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782391 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krsc7\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-kube-api-access-krsc7\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782499 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782818 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77c4107c-2b4b-46f2-bf47-ccf384504fb1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782936 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77c4107c-2b4b-46f2-bf47-ccf384504fb1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.782998 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.889468 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890004 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890002 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" 
Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890364 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890454 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890533 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890575 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krsc7\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-kube-api-access-krsc7\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890769 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890816 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77c4107c-2b4b-46f2-bf47-ccf384504fb1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.890923 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77c4107c-2b4b-46f2-bf47-ccf384504fb1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.891167 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.891233 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.894269 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.894800 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.895178 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.895209 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.901279 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77c4107c-2b4b-46f2-bf47-ccf384504fb1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.903327 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77c4107c-2b4b-46f2-bf47-ccf384504fb1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.910119 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.914861 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krsc7\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-kube-api-access-krsc7\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.931469 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:13 crc kubenswrapper[5024]: I1128 17:20:13.995403 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:20:14 crc kubenswrapper[5024]: I1128 17:20:14.268982 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:20:14 crc kubenswrapper[5024]: I1128 17:20:14.635755 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:20:14 crc kubenswrapper[5024]: I1128 17:20:14.767249 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" event={"ID":"c9dff956-8c29-446a-b6a9-f64ec4ea58b2","Type":"ContainerStarted","Data":"9db438f6bbecabe1dcf8aae6c6d61b2717866d9fe8599d47103d5ea05fcaf8fc"} Nov 28 17:20:14 crc kubenswrapper[5024]: I1128 17:20:14.770974 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a996fd8-35ac-41d9-a490-71dc31fa0686","Type":"ContainerStarted","Data":"441a536cbc861803f5928c6671a3a0177140c907f0f10a4da7b17925a0dea82f"} Nov 28 17:20:14 crc kubenswrapper[5024]: I1128 17:20:14.773186 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77c4107c-2b4b-46f2-bf47-ccf384504fb1","Type":"ContainerStarted","Data":"cefbf33eb3799f04361bb7c6cc2517ff004a2fb67263fb303c6defc7d329ab7c"} Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.121884 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.124154 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.133936 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.135581 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4jp78" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.136301 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.136371 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.138286 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.140776 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.232999 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-operator-scripts\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.233135 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.233164 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-config-data-default\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.233200 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.233230 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-kolla-config\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.233278 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-config-data-generated\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.233308 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.233363 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgd4f\" (UniqueName: \"kubernetes.io/projected/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-kube-api-access-mgd4f\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.341636 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-operator-scripts\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.342041 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.342077 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-config-data-default\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.342113 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.342136 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-kolla-config\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.342192 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-config-data-generated\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.342226 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.342298 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgd4f\" (UniqueName: \"kubernetes.io/projected/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-kube-api-access-mgd4f\") pod \"openstack-galera-0\" (UID: 
\"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.343443 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-kolla-config\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.343705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-operator-scripts\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.344092 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.346097 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-config-data-generated\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.346856 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-config-data-default\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.362106 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.362256 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.393012 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgd4f\" (UniqueName: \"kubernetes.io/projected/89e70753-1dcf-4ff8-8859-5bd6d55cbe47-kube-api-access-mgd4f\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.393034 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"89e70753-1dcf-4ff8-8859-5bd6d55cbe47\") " pod="openstack/openstack-galera-0" Nov 28 17:20:15 crc kubenswrapper[5024]: I1128 17:20:15.447940 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.556234 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.558294 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.561805 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.562529 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.562853 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.563125 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lnlqt" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.572608 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.679984 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.680910 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnsz\" (UniqueName: \"kubernetes.io/projected/27bdb46e-71e8-41d7-b796-b10d95025f95-kube-api-access-qtnsz\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.680939 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bdb46e-71e8-41d7-b796-b10d95025f95-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.680998 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.681069 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.681136 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.681215 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27bdb46e-71e8-41d7-b796-b10d95025f95-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.681254 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27bdb46e-71e8-41d7-b796-b10d95025f95-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782587 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782658 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782698 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782765 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27bdb46e-71e8-41d7-b796-b10d95025f95-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782792 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27bdb46e-71e8-41d7-b796-b10d95025f95-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782839 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782874 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnsz\" (UniqueName: \"kubernetes.io/projected/27bdb46e-71e8-41d7-b796-b10d95025f95-kube-api-access-qtnsz\") pod \"openstack-cell1-galera-0\" (UID: 
\"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.782890 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bdb46e-71e8-41d7-b796-b10d95025f95-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.787222 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bdb46e-71e8-41d7-b796-b10d95025f95-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.788068 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.788521 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.789557 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27bdb46e-71e8-41d7-b796-b10d95025f95-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.794524 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.794931 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27bdb46e-71e8-41d7-b796-b10d95025f95-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.796463 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27bdb46e-71e8-41d7-b796-b10d95025f95-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.813531 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnsz\" (UniqueName: \"kubernetes.io/projected/27bdb46e-71e8-41d7-b796-b10d95025f95-kube-api-access-qtnsz\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc 
kubenswrapper[5024]: I1128 17:20:16.832186 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27bdb46e-71e8-41d7-b796-b10d95025f95\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.893109 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.900339 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.902114 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.904841 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-xfm4s" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.904912 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.908800 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.914666 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.988153 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe32246-2e6f-47af-85ae-ea93f6e05037-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.988207 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75m5\" (UniqueName: \"kubernetes.io/projected/1fe32246-2e6f-47af-85ae-ea93f6e05037-kube-api-access-h75m5\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.988254 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fe32246-2e6f-47af-85ae-ea93f6e05037-kolla-config\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.988516 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fe32246-2e6f-47af-85ae-ea93f6e05037-config-data\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:16 crc kubenswrapper[5024]: I1128 17:20:16.988595 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fe32246-2e6f-47af-85ae-ea93f6e05037-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.091001 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1fe32246-2e6f-47af-85ae-ea93f6e05037-config-data\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.091067 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fe32246-2e6f-47af-85ae-ea93f6e05037-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.091152 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe32246-2e6f-47af-85ae-ea93f6e05037-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.091181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h75m5\" (UniqueName: \"kubernetes.io/projected/1fe32246-2e6f-47af-85ae-ea93f6e05037-kube-api-access-h75m5\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.091221 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fe32246-2e6f-47af-85ae-ea93f6e05037-kolla-config\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.096066 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fe32246-2e6f-47af-85ae-ea93f6e05037-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.096279 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fe32246-2e6f-47af-85ae-ea93f6e05037-kolla-config\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.096512 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fe32246-2e6f-47af-85ae-ea93f6e05037-config-data\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.101760 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe32246-2e6f-47af-85ae-ea93f6e05037-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.108568 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h75m5\" (UniqueName: \"kubernetes.io/projected/1fe32246-2e6f-47af-85ae-ea93f6e05037-kube-api-access-h75m5\") pod \"memcached-0\" (UID: \"1fe32246-2e6f-47af-85ae-ea93f6e05037\") " pod="openstack/memcached-0" Nov 28 17:20:17 crc kubenswrapper[5024]: I1128 17:20:17.226817 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.023048 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.029855 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.053770 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wpv56" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.059049 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.162033 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6wx\" (UniqueName: \"kubernetes.io/projected/c48cac67-542a-4982-98f3-19161065f4fc-kube-api-access-4l6wx\") pod \"kube-state-metrics-0\" (UID: \"c48cac67-542a-4982-98f3-19161065f4fc\") " pod="openstack/kube-state-metrics-0" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.266240 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l6wx\" (UniqueName: \"kubernetes.io/projected/c48cac67-542a-4982-98f3-19161065f4fc-kube-api-access-4l6wx\") pod \"kube-state-metrics-0\" (UID: \"c48cac67-542a-4982-98f3-19161065f4fc\") " pod="openstack/kube-state-metrics-0" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.303142 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l6wx\" (UniqueName: \"kubernetes.io/projected/c48cac67-542a-4982-98f3-19161065f4fc-kube-api-access-4l6wx\") pod \"kube-state-metrics-0\" (UID: \"c48cac67-542a-4982-98f3-19161065f4fc\") " pod="openstack/kube-state-metrics-0" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.397938 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.686926 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp"] Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.688477 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.693602 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-n7gx6" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.693826 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.703111 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp"] Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.780211 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e43901-e042-4b90-81ed-194c512d9a90-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-wp9mp\" (UID: \"d8e43901-e042-4b90-81ed-194c512d9a90\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.780415 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncxt9\" (UniqueName: \"kubernetes.io/projected/d8e43901-e042-4b90-81ed-194c512d9a90-kube-api-access-ncxt9\") pod \"observability-ui-dashboards-7d5fb4cbfb-wp9mp\" (UID: \"d8e43901-e042-4b90-81ed-194c512d9a90\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.882401 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncxt9\" (UniqueName: \"kubernetes.io/projected/d8e43901-e042-4b90-81ed-194c512d9a90-kube-api-access-ncxt9\") pod \"observability-ui-dashboards-7d5fb4cbfb-wp9mp\" (UID: \"d8e43901-e042-4b90-81ed-194c512d9a90\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.882567 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e43901-e042-4b90-81ed-194c512d9a90-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-wp9mp\" (UID: \"d8e43901-e042-4b90-81ed-194c512d9a90\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:19 crc kubenswrapper[5024]: E1128 17:20:19.882698 5024 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 28 17:20:19 crc kubenswrapper[5024]: E1128 17:20:19.882741 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8e43901-e042-4b90-81ed-194c512d9a90-serving-cert podName:d8e43901-e042-4b90-81ed-194c512d9a90 nodeName:}" failed. No retries permitted until 2025-11-28 17:20:20.382725674 +0000 UTC m=+1322.431646579 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d8e43901-e042-4b90-81ed-194c512d9a90-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-wp9mp" (UID: "d8e43901-e042-4b90-81ed-194c512d9a90") : secret "observability-ui-dashboards" not found Nov 28 17:20:19 crc kubenswrapper[5024]: I1128 17:20:19.912338 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncxt9\" (UniqueName: \"kubernetes.io/projected/d8e43901-e042-4b90-81ed-194c512d9a90-kube-api-access-ncxt9\") pod \"observability-ui-dashboards-7d5fb4cbfb-wp9mp\" (UID: \"d8e43901-e042-4b90-81ed-194c512d9a90\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.116556 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6dc4c5dd4b-6c2q9"] Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.118087 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.150277 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6dc4c5dd4b-6c2q9"] Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.188538 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-service-ca\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.188623 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-oauth-serving-cert\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.188656 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dffe34a1-60ac-4513-a512-5d42cb858486-console-serving-cert\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.188739 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-console-config\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.188761 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5k4\" (UniqueName: \"kubernetes.io/projected/dffe34a1-60ac-4513-a512-5d42cb858486-kube-api-access-gk5k4\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.188781 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-trusted-ca-bundle\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.188878 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dffe34a1-60ac-4513-a512-5d42cb858486-console-oauth-config\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.291375 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-oauth-serving-cert\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.291442 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dffe34a1-60ac-4513-a512-5d42cb858486-console-serving-cert\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.291567 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-console-config\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.291598 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk5k4\" (UniqueName: \"kubernetes.io/projected/dffe34a1-60ac-4513-a512-5d42cb858486-kube-api-access-gk5k4\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.291629 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-trusted-ca-bundle\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.291742 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dffe34a1-60ac-4513-a512-5d42cb858486-console-oauth-config\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.291799 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-service-ca\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.292679 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-console-config\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.293641 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-service-ca\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.293769 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-trusted-ca-bundle\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.294598 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dffe34a1-60ac-4513-a512-5d42cb858486-oauth-serving-cert\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.299704 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dffe34a1-60ac-4513-a512-5d42cb858486-console-oauth-config\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.317400 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dffe34a1-60ac-4513-a512-5d42cb858486-console-serving-cert\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.336581 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5k4\" (UniqueName: \"kubernetes.io/projected/dffe34a1-60ac-4513-a512-5d42cb858486-kube-api-access-gk5k4\") pod \"console-6dc4c5dd4b-6c2q9\" (UID: \"dffe34a1-60ac-4513-a512-5d42cb858486\") " pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.352747 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.365345 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.376524 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.376733 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.377004 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.377887 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hk4n6" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.378148 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.378291 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.379914 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.396844 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e43901-e042-4b90-81ed-194c512d9a90-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-wp9mp\" (UID: \"d8e43901-e042-4b90-81ed-194c512d9a90\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.402493 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8e43901-e042-4b90-81ed-194c512d9a90-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-wp9mp\" (UID: \"d8e43901-e042-4b90-81ed-194c512d9a90\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.451210 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.505876 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgv8\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-kube-api-access-ctgv8\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.505991 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.506163 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.506357 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.506424 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.506487 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.506520 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.506567 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.608665 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.608724 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.608755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.608787 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.608821 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.608919 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctgv8\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-kube-api-access-ctgv8\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.608999 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.609093 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.609914 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.609106 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.613358 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.614365 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.614481 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.629429 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.632678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.639622 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.640903 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.643897 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctgv8\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-kube-api-access-ctgv8\") pod \"prometheus-metric-storage-0\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:20 crc kubenswrapper[5024]: I1128 17:20:20.764958 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.810203 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gwmd4"] Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.811958 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.815412 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.815744 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-l6d4j" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.815909 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.818180 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tst7t"] Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.820883 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.827670 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gwmd4"] Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.853873 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tst7t"] Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889587 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-etc-ovs\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889651 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-run\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889702 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-log-ovn\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889728 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/50b88778-9829-4418-bfc4-a7377039d584-ovn-controller-tls-certs\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889859 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50b88778-9829-4418-bfc4-a7377039d584-scripts\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889924 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2jw\" (UniqueName: \"kubernetes.io/projected/50b88778-9829-4418-bfc4-a7377039d584-kube-api-access-2h2jw\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " 
pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889970 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-log\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.889989 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-lib\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.891740 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-run\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.891941 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-run-ovn\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.892009 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7387769-8164-4608-aa9a-51bf86870cad-scripts\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.892132 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b88778-9829-4418-bfc4-a7377039d584-combined-ca-bundle\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.892192 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnvk\" (UniqueName: \"kubernetes.io/projected/b7387769-8164-4608-aa9a-51bf86870cad-kube-api-access-lnnvk\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.995586 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b88778-9829-4418-bfc4-a7377039d584-combined-ca-bundle\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.995690 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnvk\" (UniqueName: \"kubernetes.io/projected/b7387769-8164-4608-aa9a-51bf86870cad-kube-api-access-lnnvk\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 
28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.995866 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-etc-ovs\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.995922 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-run\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996008 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-log-ovn\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996073 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/50b88778-9829-4418-bfc4-a7377039d584-ovn-controller-tls-certs\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996119 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50b88778-9829-4418-bfc4-a7377039d584-scripts\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996149 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2jw\" (UniqueName: \"kubernetes.io/projected/50b88778-9829-4418-bfc4-a7377039d584-kube-api-access-2h2jw\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996184 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-log\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996206 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-lib\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996243 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-run\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996290 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-run-ovn\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:23 crc kubenswrapper[5024]: I1128 17:20:23.996317 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7387769-8164-4608-aa9a-51bf86870cad-scripts\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.000294 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-lib\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.000468 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-etc-ovs\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.000509 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-run\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.000557 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-run\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.000678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-run-ovn\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.000719 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/50b88778-9829-4418-bfc4-a7377039d584-var-log-ovn\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.001374 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7387769-8164-4608-aa9a-51bf86870cad-var-log\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.002768 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50b88778-9829-4418-bfc4-a7377039d584-scripts\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.002843 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/b7387769-8164-4608-aa9a-51bf86870cad-scripts\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.006599 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/50b88778-9829-4418-bfc4-a7377039d584-ovn-controller-tls-certs\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.008241 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b88778-9829-4418-bfc4-a7377039d584-combined-ca-bundle\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.018249 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2jw\" (UniqueName: \"kubernetes.io/projected/50b88778-9829-4418-bfc4-a7377039d584-kube-api-access-2h2jw\") pod \"ovn-controller-gwmd4\" (UID: \"50b88778-9829-4418-bfc4-a7377039d584\") " pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.025829 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnvk\" (UniqueName: \"kubernetes.io/projected/b7387769-8164-4608-aa9a-51bf86870cad-kube-api-access-lnnvk\") pod \"ovn-controller-ovs-tst7t\" (UID: \"b7387769-8164-4608-aa9a-51bf86870cad\") " pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.037292 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.041942 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.046010 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.046528 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dwvzx" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.046737 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.046969 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.047173 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.052574 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.098249 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.098744 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.098785 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4dj9\" (UniqueName: \"kubernetes.io/projected/620e671a-94a6-4ebb-807d-88c062028090-kube-api-access-b4dj9\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.099540 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.099590 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/620e671a-94a6-4ebb-807d-88c062028090-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.099661 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620e671a-94a6-4ebb-807d-88c062028090-config\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.099743 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/620e671a-94a6-4ebb-807d-88c062028090-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.099831 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.138102 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.148784 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.203795 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.203879 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.203909 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dj9\" (UniqueName: \"kubernetes.io/projected/620e671a-94a6-4ebb-807d-88c062028090-kube-api-access-b4dj9\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.203950 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.203971 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/620e671a-94a6-4ebb-807d-88c062028090-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.204006 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620e671a-94a6-4ebb-807d-88c062028090-config\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.204066 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/620e671a-94a6-4ebb-807d-88c062028090-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.204109 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.204272 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.205181 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/620e671a-94a6-4ebb-807d-88c062028090-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.205513 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620e671a-94a6-4ebb-807d-88c062028090-config\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.205921 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/620e671a-94a6-4ebb-807d-88c062028090-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.208330 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.209440 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.226791 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/620e671a-94a6-4ebb-807d-88c062028090-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.230927 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4dj9\" (UniqueName: \"kubernetes.io/projected/620e671a-94a6-4ebb-807d-88c062028090-kube-api-access-b4dj9\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.237853 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"620e671a-94a6-4ebb-807d-88c062028090\") " pod="openstack/ovsdbserver-nb-0" Nov 28 
17:20:24 crc kubenswrapper[5024]: I1128 17:20:24.408093 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.274328 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.280374 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.290303 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.290367 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.290429 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jmlzg" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.290475 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.295660 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.448774 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.448865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.448891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.448974 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.449726 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.449760 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-config\") pod \"ovsdbserver-sb-0\" 
(UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.449792 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rl9\" (UniqueName: \"kubernetes.io/projected/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-kube-api-access-x2rl9\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.449822 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552038 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552097 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552161 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552246 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552279 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552306 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rl9\" (UniqueName: \"kubernetes.io/projected/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-kube-api-access-x2rl9\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552335 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.552445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.553803 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.555416 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-config\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.555569 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.556260 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.559874 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.577708 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.579612 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.584484 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rl9\" (UniqueName: \"kubernetes.io/projected/67f2019a-e1ff-46c7-9ec9-a1762e83f1c1-kube-api-access-x2rl9\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.603711 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:26 crc kubenswrapper[5024]: I1128 17:20:26.613879 5024 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:34 crc kubenswrapper[5024]: E1128 17:20:34.375148 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 17:20:34 crc kubenswrapper[5024]: E1128 17:20:34.376162 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgcg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-gx9kn_openstack(ef7b3aae-0376-47da-a875-80861382c90c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:20:34 crc kubenswrapper[5024]: E1128 17:20:34.377323 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" podUID="ef7b3aae-0376-47da-a875-80861382c90c" Nov 28 17:20:35 crc kubenswrapper[5024]: E1128 17:20:35.910352 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 17:20:35 crc kubenswrapper[5024]: E1128 17:20:35.910994 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* 
--conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5hrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-l7gzz_openstack(29e825cb-cc43-43cc-9b9d-f376e964c371): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:20:35 crc kubenswrapper[5024]: E1128 17:20:35.912264 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" podUID="29e825cb-cc43-43cc-9b9d-f376e964c371" Nov 28 17:20:36 crc kubenswrapper[5024]: E1128 17:20:36.532143 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 17:20:36 crc kubenswrapper[5024]: E1128 17:20:36.532300 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5tbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-4czkq_openstack(8f9a33b1-4a6d-46f2-a251-d4f75fa7171d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:20:36 crc kubenswrapper[5024]: E1128 17:20:36.533923 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" podUID="8f9a33b1-4a6d-46f2-a251-d4f75fa7171d" Nov 28 17:20:37 crc kubenswrapper[5024]: E1128 17:20:37.023498 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" podUID="8f9a33b1-4a6d-46f2-a251-d4f75fa7171d" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.407948 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.458280 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.508899 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b3aae-0376-47da-a875-80861382c90c-config\") pod \"ef7b3aae-0376-47da-a875-80861382c90c\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.509010 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgcg5\" (UniqueName: \"kubernetes.io/projected/ef7b3aae-0376-47da-a875-80861382c90c-kube-api-access-wgcg5\") pod \"ef7b3aae-0376-47da-a875-80861382c90c\" (UID: \"ef7b3aae-0376-47da-a875-80861382c90c\") " Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.513509 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7b3aae-0376-47da-a875-80861382c90c-config" (OuterVolumeSpecName: "config") pod "ef7b3aae-0376-47da-a875-80861382c90c" (UID: "ef7b3aae-0376-47da-a875-80861382c90c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.521967 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7b3aae-0376-47da-a875-80861382c90c-kube-api-access-wgcg5" (OuterVolumeSpecName: "kube-api-access-wgcg5") pod "ef7b3aae-0376-47da-a875-80861382c90c" (UID: "ef7b3aae-0376-47da-a875-80861382c90c"). InnerVolumeSpecName "kube-api-access-wgcg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.564820 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.564870 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.612043 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-config\") pod \"29e825cb-cc43-43cc-9b9d-f376e964c371\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.612101 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-dns-svc\") pod \"29e825cb-cc43-43cc-9b9d-f376e964c371\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.612279 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5hrb\" (UniqueName: \"kubernetes.io/projected/29e825cb-cc43-43cc-9b9d-f376e964c371-kube-api-access-x5hrb\") pod \"29e825cb-cc43-43cc-9b9d-f376e964c371\" (UID: \"29e825cb-cc43-43cc-9b9d-f376e964c371\") " Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.612999 5024 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7b3aae-0376-47da-a875-80861382c90c-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.613032 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgcg5\" (UniqueName: \"kubernetes.io/projected/ef7b3aae-0376-47da-a875-80861382c90c-kube-api-access-wgcg5\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.615611 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-config" (OuterVolumeSpecName: "config") pod "29e825cb-cc43-43cc-9b9d-f376e964c371" (UID: "29e825cb-cc43-43cc-9b9d-f376e964c371"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.616110 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29e825cb-cc43-43cc-9b9d-f376e964c371" (UID: "29e825cb-cc43-43cc-9b9d-f376e964c371"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.617335 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e825cb-cc43-43cc-9b9d-f376e964c371-kube-api-access-x5hrb" (OuterVolumeSpecName: "kube-api-access-x5hrb") pod "29e825cb-cc43-43cc-9b9d-f376e964c371" (UID: "29e825cb-cc43-43cc-9b9d-f376e964c371"). InnerVolumeSpecName "kube-api-access-x5hrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.715119 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5hrb\" (UniqueName: \"kubernetes.io/projected/29e825cb-cc43-43cc-9b9d-f376e964c371-kube-api-access-x5hrb\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.715425 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.715438 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29e825cb-cc43-43cc-9b9d-f376e964c371-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.899932 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp"] Nov 28 17:20:37 crc kubenswrapper[5024]: I1128 17:20:37.956271 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.032738 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" event={"ID":"ef7b3aae-0376-47da-a875-80861382c90c","Type":"ContainerDied","Data":"6ed3e0c173bf1a791cd48e829774ab16a54e265c5826e1227923b703290ea1b9"} Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.032806 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gx9kn" Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.034363 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" event={"ID":"29e825cb-cc43-43cc-9b9d-f376e964c371","Type":"ContainerDied","Data":"f24b14c3bcd9f64414079053e289a12b93e0affb52cdf085d9a68d65b4374a05"} Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.034652 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l7gzz" Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.045812 5024 generic.go:334] "Generic (PLEG): container finished" podID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerID="c04b8e3752a2dcdf3f616dc2040e1ca59520d2b94cceac4080c744d5a8dbfef1" exitCode=0 Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.045879 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" event={"ID":"c9dff956-8c29-446a-b6a9-f64ec4ea58b2","Type":"ContainerDied","Data":"c04b8e3752a2dcdf3f616dc2040e1ca59520d2b94cceac4080c744d5a8dbfef1"} Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.046678 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" event={"ID":"d8e43901-e042-4b90-81ed-194c512d9a90","Type":"ContainerStarted","Data":"d9d455b69b7f388803b959b628a50125ec2b64d50331d00e26b614c8e3e25dff"} Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.049950 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"620e671a-94a6-4ebb-807d-88c062028090","Type":"ContainerStarted","Data":"85c039f3e7ad2f3cdc7da66b7606da5bea46caa4a2754b6290d79c4dc313d952"} Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.153331 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gx9kn"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.164783 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gx9kn"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.190214 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l7gzz"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.199116 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l7gzz"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.237316 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.246481 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6dc4c5dd4b-6c2q9"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.272123 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.282166 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 17:20:38 crc kubenswrapper[5024]: W1128 17:20:38.388072 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fe32246_2e6f_47af_85ae_ea93f6e05037.slice/crio-53f520a11d84c538a4bbdc673197289be569c784f8b4314eddccb63721bfd584 WatchSource:0}: Error finding container 53f520a11d84c538a4bbdc673197289be569c784f8b4314eddccb63721bfd584: Status 404 returned error can't find the container with id 
53f520a11d84c538a4bbdc673197289be569c784f8b4314eddccb63721bfd584 Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.452520 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tst7t"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.513525 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e825cb-cc43-43cc-9b9d-f376e964c371" path="/var/lib/kubelet/pods/29e825cb-cc43-43cc-9b9d-f376e964c371/volumes" Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.514605 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7b3aae-0376-47da-a875-80861382c90c" path="/var/lib/kubelet/pods/ef7b3aae-0376-47da-a875-80861382c90c/volumes" Nov 28 17:20:38 crc kubenswrapper[5024]: W1128 17:20:38.684351 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a8a5d6d_4404_4848_a8b9_d47cee1e350d.slice/crio-ea490dcf90950e7b3891033eb4128bd645aca732cd3dd683ab9f4f39301b15b6 WatchSource:0}: Error finding container ea490dcf90950e7b3891033eb4128bd645aca732cd3dd683ab9f4f39301b15b6: Status 404 returned error can't find the container with id ea490dcf90950e7b3891033eb4128bd645aca732cd3dd683ab9f4f39301b15b6 Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.700381 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 17:20:38 crc kubenswrapper[5024]: W1128 17:20:38.707556 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e70753_1dcf_4ff8_8859_5bd6d55cbe47.slice/crio-172b681b5ce63d78f13d1f8f11366b71cdb01db2bb939558e2248d301d937c80 WatchSource:0}: Error finding container 172b681b5ce63d78f13d1f8f11366b71cdb01db2bb939558e2248d301d937c80: Status 404 returned error can't find the container with id 172b681b5ce63d78f13d1f8f11366b71cdb01db2bb939558e2248d301d937c80 Nov 28 17:20:38 crc kubenswrapper[5024]: W1128 17:20:38.713692 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50b88778_9829_4418_bfc4_a7377039d584.slice/crio-46c7442b8882e46001bfb4da46a49f65b28081b54de97c0500dfe202d0c7b8f8 WatchSource:0}: Error finding container 46c7442b8882e46001bfb4da46a49f65b28081b54de97c0500dfe202d0c7b8f8: Status 404 returned error can't find the container with id 46c7442b8882e46001bfb4da46a49f65b28081b54de97c0500dfe202d0c7b8f8 Nov 28 17:20:38 crc kubenswrapper[5024]: W1128 17:20:38.717976 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27bdb46e_71e8_41d7_b796_b10d95025f95.slice/crio-27941e1f631a55ecea4c88c33578dd29d0bff1316b588df84a75b0539b31e3ce WatchSource:0}: Error finding container 27941e1f631a55ecea4c88c33578dd29d0bff1316b588df84a75b0539b31e3ce: Status 404 returned error can't find the container with id 27941e1f631a55ecea4c88c33578dd29d0bff1316b588df84a75b0539b31e3ce Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.722350 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.743900 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gwmd4"] Nov 28 17:20:38 crc kubenswrapper[5024]: I1128 17:20:38.763909 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 17:20:39 crc kubenswrapper[5024]: 
I1128 17:20:39.139337 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"89e70753-1dcf-4ff8-8859-5bd6d55cbe47","Type":"ContainerStarted","Data":"172b681b5ce63d78f13d1f8f11366b71cdb01db2bb939558e2248d301d937c80"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.147898 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tst7t" event={"ID":"b7387769-8164-4608-aa9a-51bf86870cad","Type":"ContainerStarted","Data":"f067b9eda0764508c79e800f03c3b67671961bae5dce1f4b2fdd7fe8217881bc"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.187552 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dc4c5dd4b-6c2q9" event={"ID":"dffe34a1-60ac-4513-a512-5d42cb858486","Type":"ContainerStarted","Data":"4e73d7d322633e789a3aec1fc5beffd7c1fd466378a98fb700a6dc0b33f26251"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.187624 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6dc4c5dd4b-6c2q9" event={"ID":"dffe34a1-60ac-4513-a512-5d42cb858486","Type":"ContainerStarted","Data":"fc38756f7a9cd82041d4be0f4365a3fad9b04cb8a3b700bee1dd14f3f3b98e3f"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.202320 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27bdb46e-71e8-41d7-b796-b10d95025f95","Type":"ContainerStarted","Data":"27941e1f631a55ecea4c88c33578dd29d0bff1316b588df84a75b0539b31e3ce"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.210531 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1fe32246-2e6f-47af-85ae-ea93f6e05037","Type":"ContainerStarted","Data":"53f520a11d84c538a4bbdc673197289be569c784f8b4314eddccb63721bfd584"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.242845 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerStarted","Data":"ea490dcf90950e7b3891033eb4128bd645aca732cd3dd683ab9f4f39301b15b6"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.244828 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4" event={"ID":"50b88778-9829-4418-bfc4-a7377039d584","Type":"ContainerStarted","Data":"46c7442b8882e46001bfb4da46a49f65b28081b54de97c0500dfe202d0c7b8f8"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.247846 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1","Type":"ContainerStarted","Data":"eb4551117c196267e87684c6ff942a9af151b909c8b464cf706601b3ac9524d9"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.252789 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" event={"ID":"c9dff956-8c29-446a-b6a9-f64ec4ea58b2","Type":"ContainerStarted","Data":"8565e775bfbde208e7f91d72747da59fe89a02a78eaaad0fa6a2248e38157fed"} Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.253349 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.255867 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c48cac67-542a-4982-98f3-19161065f4fc","Type":"ContainerStarted","Data":"90f27a88dc71fbd067518ed68fab1cf74919129b34ba4dbd5f91530ef61045a6"} Nov 28 
17:20:39 crc kubenswrapper[5024]: I1128 17:20:39.285466 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6dc4c5dd4b-6c2q9" podStartSLOduration=19.285445839 podStartE2EDuration="19.285445839s" podCreationTimestamp="2025-11-28 17:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:39.241436381 +0000 UTC m=+1341.290357286" watchObservedRunningTime="2025-11-28 17:20:39.285445839 +0000 UTC m=+1341.334366764" Nov 28 17:20:40 crc kubenswrapper[5024]: I1128 17:20:40.272775 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77c4107c-2b4b-46f2-bf47-ccf384504fb1","Type":"ContainerStarted","Data":"c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4"} Nov 28 17:20:40 crc kubenswrapper[5024]: I1128 17:20:40.276148 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a996fd8-35ac-41d9-a490-71dc31fa0686","Type":"ContainerStarted","Data":"2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11"} Nov 28 17:20:40 crc kubenswrapper[5024]: I1128 17:20:40.314198 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" podStartSLOduration=4.495790694 podStartE2EDuration="28.314158743s" podCreationTimestamp="2025-11-28 17:20:12 +0000 UTC" firstStartedPulling="2025-11-28 17:20:13.738917318 +0000 UTC m=+1315.787838223" lastFinishedPulling="2025-11-28 17:20:37.557285367 +0000 UTC m=+1339.606206272" observedRunningTime="2025-11-28 17:20:39.280006506 +0000 UTC m=+1341.328927411" watchObservedRunningTime="2025-11-28 17:20:40.314158743 +0000 UTC m=+1342.363079648" Nov 28 17:20:40 crc kubenswrapper[5024]: I1128 17:20:40.452798 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:40 crc kubenswrapper[5024]: I1128 17:20:40.452844 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:40 crc kubenswrapper[5024]: I1128 17:20:40.461914 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:41 crc kubenswrapper[5024]: I1128 17:20:41.291910 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6dc4c5dd4b-6c2q9" Nov 28 17:20:41 crc kubenswrapper[5024]: I1128 17:20:41.384839 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78fdf7cd4f-99mvs"] Nov 28 17:20:47 crc kubenswrapper[5024]: I1128 17:20:47.847193 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:20:47 crc kubenswrapper[5024]: I1128 17:20:47.938836 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4czkq"] Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.425968 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1","Type":"ContainerStarted","Data":"ca3154bd82637c68d47977f908b55fb5f49441a54131ecc0a24a14f9d9f143ce"} Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.431210 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" event={"ID":"d8e43901-e042-4b90-81ed-194c512d9a90","Type":"ContainerStarted","Data":"294dc38e3ceccea92a3203edc9883277c0f1d6f7492e20ee2099d30c0e77292d"} Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.439922 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.442419 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.457741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"620e671a-94a6-4ebb-807d-88c062028090","Type":"ContainerStarted","Data":"83b7429e6557a13aef7bf7e770a67da4e8b0c9ae8cd2d301d1b073cb59563224"} Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.463868 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gwmd4" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.489428 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27bdb46e-71e8-41d7-b796-b10d95025f95","Type":"ContainerStarted","Data":"3013cd720706d83a041823c54db1e9cb41a90f8545c5be70389acc1aefaa9630"} Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.494594 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-wp9mp" podStartSLOduration=20.471721031 podStartE2EDuration="29.494574727s" podCreationTimestamp="2025-11-28 17:20:19 +0000 UTC" firstStartedPulling="2025-11-28 17:20:37.905355857 +0000 UTC m=+1339.954276762" lastFinishedPulling="2025-11-28 17:20:46.928209553 +0000 UTC m=+1348.977130458" observedRunningTime="2025-11-28 17:20:48.474840348 +0000 UTC m=+1350.523761243" watchObservedRunningTime="2025-11-28 17:20:48.494574727 +0000 UTC m=+1350.543495632" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.495238 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1fe32246-2e6f-47af-85ae-ea93f6e05037","Type":"ContainerStarted","Data":"d041fb3ce7ef1ab468f595ee99a738afd1b176fea13bf6b3a218a473989113ca"} Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.495669 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.529461 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gwmd4" podStartSLOduration=17.187701165 podStartE2EDuration="25.529439315s" podCreationTimestamp="2025-11-28 17:20:23 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.716486205 +0000 UTC m=+1340.765407110" lastFinishedPulling="2025-11-28 17:20:47.058224365 +0000 UTC m=+1349.107145260" observedRunningTime="2025-11-28 17:20:48.512112309 +0000 UTC m=+1350.561033224" watchObservedRunningTime="2025-11-28 17:20:48.529439315 +0000 UTC m=+1350.578360220" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.536470 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-config\") pod \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.537036 5024 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-dns-svc\") pod \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.537224 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5tbx\" (UniqueName: \"kubernetes.io/projected/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-kube-api-access-q5tbx\") pod \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\" (UID: \"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d\") " Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.537355 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-config" (OuterVolumeSpecName: "config") pod "8f9a33b1-4a6d-46f2-a251-d4f75fa7171d" (UID: "8f9a33b1-4a6d-46f2-a251-d4f75fa7171d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.537823 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f9a33b1-4a6d-46f2-a251-d4f75fa7171d" (UID: "8f9a33b1-4a6d-46f2-a251-d4f75fa7171d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.538589 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.538712 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.545747 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-kube-api-access-q5tbx" (OuterVolumeSpecName: "kube-api-access-q5tbx") pod "8f9a33b1-4a6d-46f2-a251-d4f75fa7171d" (UID: "8f9a33b1-4a6d-46f2-a251-d4f75fa7171d"). InnerVolumeSpecName "kube-api-access-q5tbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.560436 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.177451937 podStartE2EDuration="30.56041356s" podCreationTimestamp="2025-11-28 17:20:18 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.258406779 +0000 UTC m=+1340.307327674" lastFinishedPulling="2025-11-28 17:20:47.641368392 +0000 UTC m=+1349.690289297" observedRunningTime="2025-11-28 17:20:48.545099957 +0000 UTC m=+1350.594020862" watchObservedRunningTime="2025-11-28 17:20:48.56041356 +0000 UTC m=+1350.609334465" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.600444 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.549650639 podStartE2EDuration="32.600422282s" podCreationTimestamp="2025-11-28 17:20:16 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.390876395 +0000 UTC m=+1340.439797300" lastFinishedPulling="2025-11-28 17:20:46.441648038 +0000 UTC m=+1348.490568943" observedRunningTime="2025-11-28 17:20:48.595528873 +0000 UTC m=+1350.644449778" watchObservedRunningTime="2025-11-28 17:20:48.600422282 +0000 UTC m=+1350.649343187" Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.643894 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5tbx\" (UniqueName: \"kubernetes.io/projected/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-kube-api-access-q5tbx\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.511596 5024 generic.go:334] "Generic (PLEG): container finished" podID="b7387769-8164-4608-aa9a-51bf86870cad" containerID="5b0eeb50bcf2ad6734ab051fc862fe1fc245ae744b04697f277518580a3099d0" exitCode=0 Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.511669 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tst7t" event={"ID":"b7387769-8164-4608-aa9a-51bf86870cad","Type":"ContainerDied","Data":"5b0eeb50bcf2ad6734ab051fc862fe1fc245ae744b04697f277518580a3099d0"} Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.514635 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c48cac67-542a-4982-98f3-19161065f4fc","Type":"ContainerStarted","Data":"b41560ff1c9095e5c76c904102f2614192b2323b7c5a0a7e0ea7b0b8808bed08"} Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.517280 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" event={"ID":"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d","Type":"ContainerDied","Data":"d4e2e65e75a1be9bc126cba4b79a5d1ec0c2fb3790ef9803967ba4b20aeb16a3"} Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.517404 5024 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:20:48 crc kubenswrapper[5024]: I1128 17:20:48.643894 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5tbx\" (UniqueName: \"kubernetes.io/projected/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d-kube-api-access-q5tbx\") on node \"crc\" DevicePath \"\""
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.511596 5024 generic.go:334] "Generic (PLEG): container finished" podID="b7387769-8164-4608-aa9a-51bf86870cad" containerID="5b0eeb50bcf2ad6734ab051fc862fe1fc245ae744b04697f277518580a3099d0" exitCode=0
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.511669 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tst7t" event={"ID":"b7387769-8164-4608-aa9a-51bf86870cad","Type":"ContainerDied","Data":"5b0eeb50bcf2ad6734ab051fc862fe1fc245ae744b04697f277518580a3099d0"}
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.514635 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c48cac67-542a-4982-98f3-19161065f4fc","Type":"ContainerStarted","Data":"b41560ff1c9095e5c76c904102f2614192b2323b7c5a0a7e0ea7b0b8808bed08"}
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.517280 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4czkq" event={"ID":"8f9a33b1-4a6d-46f2-a251-d4f75fa7171d","Type":"ContainerDied","Data":"d4e2e65e75a1be9bc126cba4b79a5d1ec0c2fb3790ef9803967ba4b20aeb16a3"}
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.517404 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4czkq"
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.528541 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4" event={"ID":"50b88778-9829-4418-bfc4-a7377039d584","Type":"ContainerStarted","Data":"a600c2ae102bc0b66d8d8181137c37a6e9a186833a005d214fbddf5cad808ce1"}
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.538383 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"89e70753-1dcf-4ff8-8859-5bd6d55cbe47","Type":"ContainerStarted","Data":"fa42ad9638e6f72988e89e9d48dd7126debebfbb2aa6a1ac1efbafb1a6f5d551"}
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.718355 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4czkq"]
Nov 28 17:20:49 crc kubenswrapper[5024]: I1128 17:20:49.733174 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4czkq"]
Nov 28 17:20:50 crc kubenswrapper[5024]: I1128 17:20:50.512897 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9a33b1-4a6d-46f2-a251-d4f75fa7171d" path="/var/lib/kubelet/pods/8f9a33b1-4a6d-46f2-a251-d4f75fa7171d/volumes"
Nov 28 17:20:50 crc kubenswrapper[5024]: I1128 17:20:50.557045 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerStarted","Data":"b395afa75b0ad17f7cdd1cbdf43f18a7de598ef4be44dc4db2bef1b45e1a42fc"}
Nov 28 17:20:50 crc kubenswrapper[5024]: I1128 17:20:50.567280 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tst7t" event={"ID":"b7387769-8164-4608-aa9a-51bf86870cad","Type":"ContainerStarted","Data":"586bce4f812a254f87bf55054a1a45ac583b527daf9750efc2c4c00d13172acb"}
Nov 28 17:20:52 crc kubenswrapper[5024]: I1128 17:20:52.319390 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 28 17:20:54 crc kubenswrapper[5024]: I1128 17:20:54.607753 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tst7t" event={"ID":"b7387769-8164-4608-aa9a-51bf86870cad","Type":"ContainerStarted","Data":"8e16cf6142e087787a63dfd8b6a832c08f5d391dd9c07ee3d2070b71b6e28a18"}
Nov 28 17:20:55 crc kubenswrapper[5024]: I1128 17:20:55.617710 5024 generic.go:334] "Generic (PLEG): container finished" podID="89e70753-1dcf-4ff8-8859-5bd6d55cbe47" containerID="fa42ad9638e6f72988e89e9d48dd7126debebfbb2aa6a1ac1efbafb1a6f5d551" exitCode=0
Nov 28 17:20:55 crc kubenswrapper[5024]: I1128 17:20:55.617800 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"89e70753-1dcf-4ff8-8859-5bd6d55cbe47","Type":"ContainerDied","Data":"fa42ad9638e6f72988e89e9d48dd7126debebfbb2aa6a1ac1efbafb1a6f5d551"}
Nov 28 17:20:55 crc kubenswrapper[5024]: I1128 17:20:55.620475 5024 generic.go:334] "Generic (PLEG): container finished" podID="27bdb46e-71e8-41d7-b796-b10d95025f95" containerID="3013cd720706d83a041823c54db1e9cb41a90f8545c5be70389acc1aefaa9630" exitCode=0
Nov 28 17:20:55 crc kubenswrapper[5024]: I1128 17:20:55.620589 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27bdb46e-71e8-41d7-b796-b10d95025f95","Type":"ContainerDied","Data":"3013cd720706d83a041823c54db1e9cb41a90f8545c5be70389acc1aefaa9630"}
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.676717 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27bdb46e-71e8-41d7-b796-b10d95025f95","Type":"ContainerStarted","Data":"ced19a89e9e40cd9c56f5f216bcda21ff0c623f51f8c9aed0fcce1e3e438ffd5"}
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.679176 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"67f2019a-e1ff-46c7-9ec9-a1762e83f1c1","Type":"ContainerStarted","Data":"f124a54eae8009cc80b9fafadb06459ab60888027904f554bbade0ab95ecd553"}
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.680849 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"620e671a-94a6-4ebb-807d-88c062028090","Type":"ContainerStarted","Data":"40a7ed8a6cd8a2508911d2434aa3115a88c93af62dfcec655ef5422f92f5f446"}
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.683267 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"89e70753-1dcf-4ff8-8859-5bd6d55cbe47","Type":"ContainerStarted","Data":"b052596256c3de7ece22f2f125d18f4b12035e9cedd36d2564798bd06b150063"}
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.683535 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tst7t"
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.683569 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tst7t"
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.732937 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.439224668 podStartE2EDuration="34.732912544s" podCreationTimestamp="2025-11-28 17:20:23 +0000 UTC" firstStartedPulling="2025-11-28 17:20:37.963948739 +0000 UTC m=+1340.012869644" lastFinishedPulling="2025-11-28 17:20:57.257636615 +0000 UTC m=+1359.306557520" observedRunningTime="2025-11-28 17:20:57.726537506 +0000 UTC m=+1359.775458411" watchObservedRunningTime="2025-11-28 17:20:57.732912544 +0000 UTC m=+1359.781833449"
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.734668 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.547689374 podStartE2EDuration="42.73465625s" podCreationTimestamp="2025-11-28 17:20:15 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.741588806 +0000 UTC m=+1340.790509711" lastFinishedPulling="2025-11-28 17:20:46.928555682 +0000 UTC m=+1348.977476587" observedRunningTime="2025-11-28 17:20:57.707555996 +0000 UTC m=+1359.756476911" watchObservedRunningTime="2025-11-28 17:20:57.73465625 +0000 UTC m=+1359.783577155"
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.759898 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tst7t" podStartSLOduration=26.539306132 podStartE2EDuration="34.759875273s" podCreationTimestamp="2025-11-28 17:20:23 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.707630182 +0000 UTC m=+1340.756551087" lastFinishedPulling="2025-11-28 17:20:46.928199323 +0000 UTC m=+1348.977120228" observedRunningTime="2025-11-28 17:20:57.752089588 +0000 UTC m=+1359.801010503" watchObservedRunningTime="2025-11-28 17:20:57.759875273 +0000 UTC m=+1359.808796178"
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.782010 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=35.074348905 podStartE2EDuration="43.781992425s" podCreationTimestamp="2025-11-28 17:20:14 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.711946366 +0000 UTC m=+1340.760867281" lastFinishedPulling="2025-11-28 17:20:47.419589896 +0000 UTC m=+1349.468510801" observedRunningTime="2025-11-28 17:20:57.775824103 +0000 UTC m=+1359.824745008" watchObservedRunningTime="2025-11-28 17:20:57.781992425 +0000 UTC m=+1359.830913340"
Nov 28 17:20:57 crc kubenswrapper[5024]: I1128 17:20:57.849102 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.326989662 podStartE2EDuration="32.849085041s" podCreationTimestamp="2025-11-28 17:20:25 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.767322983 +0000 UTC m=+1340.816243888" lastFinishedPulling="2025-11-28 17:20:57.289418362 +0000 UTC m=+1359.338339267" observedRunningTime="2025-11-28 17:20:57.844651835 +0000 UTC m=+1359.893572740" watchObservedRunningTime="2025-11-28 17:20:57.849085041 +0000 UTC m=+1359.898005936"
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.408263 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.412863 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.415327 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-jf7lk"]
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.417403 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk"
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.442950 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-jf7lk"]
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.531933 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs669\" (UniqueName: \"kubernetes.io/projected/75947d99-a968-43ed-bddc-4742a3628dfa-kube-api-access-rs669\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk"
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.532487 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk"
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.533812 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-config\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk"
Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.614201 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
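The probe lines above trace the usual ordering: a startup probe reports "unhealthy" until its first success flips it to "started", and only then do readiness results (empty status, then "ready") begin to gate the pod. A hypothetical probe pair in k8s.io/api/core/v1 types; the command, periods, and thresholds are illustrative assumptions, since the real ovsdbserver-sb-0 spec is not part of this log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ovsdbProbes sketches a startup probe that tolerates a slow first boot
// (the "unhealthy" -> "started" transition seen above) and a readiness
// probe that takes over afterwards. All values are assumptions.
func ovsdbProbes() (startup, readiness *corev1.Probe) {
	check := corev1.ProbeHandler{
		Exec: &corev1.ExecAction{
			Command: []string{"/usr/local/bin/container-scripts/healthcheck.sh"}, // hypothetical
		},
	}
	startup = &corev1.Probe{
		ProbeHandler:     check,
		PeriodSeconds:    3,  // assumed
		FailureThreshold: 20, // assumed: ~60s of grace before the kubelet restarts the container
	}
	readiness = &corev1.Probe{
		ProbeHandler:  check,
		PeriodSeconds: 5, // assumed
	}
	return startup, readiness
}

func main() {
	s, r := ovsdbProbes()
	fmt.Println("startup failureThreshold:", s.FailureThreshold, "readiness period:", r.PeriodSeconds)
}
```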
\"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.640994 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-config\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.641161 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs669\" (UniqueName: \"kubernetes.io/projected/75947d99-a968-43ed-bddc-4742a3628dfa-kube-api-access-rs669\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.642333 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.642491 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-config\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.667677 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.669625 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs669\" (UniqueName: \"kubernetes.io/projected/75947d99-a968-43ed-bddc-4742a3628dfa-kube-api-access-rs669\") pod \"dnsmasq-dns-7cb5889db5-jf7lk\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.703918 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.745084 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:20:59 crc kubenswrapper[5024]: I1128 17:20:59.754501 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.063607 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-jf7lk"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.074157 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-n7llb"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.076073 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.086670 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.091959 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n7llb"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.111398 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-m9j6q"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.113508 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.118778 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155398 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-config\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fspnh\" (UniqueName: \"kubernetes.io/projected/9a326ee1-ef89-452c-a314-fff7af6fb65f-kube-api-access-fspnh\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155542 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfzh2\" (UniqueName: \"kubernetes.io/projected/997d6659-56f8-4351-8391-ed9a3b38f63f-kube-api-access-cfzh2\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155760 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155822 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a326ee1-ef89-452c-a314-fff7af6fb65f-config\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155843 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9a326ee1-ef89-452c-a314-fff7af6fb65f-ovn-rundir\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155886 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9a326ee1-ef89-452c-a314-fff7af6fb65f-combined-ca-bundle\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155908 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9a326ee1-ef89-452c-a314-fff7af6fb65f-ovs-rundir\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.155960 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a326ee1-ef89-452c-a314-fff7af6fb65f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.156003 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.200499 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-m9j6q"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.259683 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-config\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.259750 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fspnh\" (UniqueName: \"kubernetes.io/projected/9a326ee1-ef89-452c-a314-fff7af6fb65f-kube-api-access-fspnh\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.259799 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfzh2\" (UniqueName: \"kubernetes.io/projected/997d6659-56f8-4351-8391-ed9a3b38f63f-kube-api-access-cfzh2\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.259869 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.259926 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a326ee1-ef89-452c-a314-fff7af6fb65f-config\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 
crc kubenswrapper[5024]: I1128 17:21:00.259941 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9a326ee1-ef89-452c-a314-fff7af6fb65f-ovn-rundir\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.259985 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a326ee1-ef89-452c-a314-fff7af6fb65f-combined-ca-bundle\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.260005 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9a326ee1-ef89-452c-a314-fff7af6fb65f-ovs-rundir\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.260089 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a326ee1-ef89-452c-a314-fff7af6fb65f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.260108 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.261505 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-config\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.261861 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.262338 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a326ee1-ef89-452c-a314-fff7af6fb65f-config\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.262427 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9a326ee1-ef89-452c-a314-fff7af6fb65f-ovs-rundir\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.262539 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/9a326ee1-ef89-452c-a314-fff7af6fb65f-ovn-rundir\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.263420 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.279245 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a326ee1-ef89-452c-a314-fff7af6fb65f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.279778 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a326ee1-ef89-452c-a314-fff7af6fb65f-combined-ca-bundle\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.292986 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fspnh\" (UniqueName: \"kubernetes.io/projected/9a326ee1-ef89-452c-a314-fff7af6fb65f-kube-api-access-fspnh\") pod \"ovn-controller-metrics-n7llb\" (UID: \"9a326ee1-ef89-452c-a314-fff7af6fb65f\") " pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.294090 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfzh2\" (UniqueName: \"kubernetes.io/projected/997d6659-56f8-4351-8391-ed9a3b38f63f-kube-api-access-cfzh2\") pod \"dnsmasq-dns-6c89d5d749-m9j6q\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.313597 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-jf7lk"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.408593 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-n7llb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.409260 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.449338 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.456517 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-m9j6q"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.499508 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.529442 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-22p46"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.537604 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.541523 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.575161 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-config\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.575232 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-dns-svc\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.575471 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.575526 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr5t\" (UniqueName: \"kubernetes.io/projected/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-kube-api-access-plr5t\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.575570 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.576941 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-22p46"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.608847 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.642056 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.645485 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-k88jl" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.645672 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.646618 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.647223 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.654989 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.686639 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.686693 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr5t\" (UniqueName: \"kubernetes.io/projected/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-kube-api-access-plr5t\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.686727 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.686837 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-config\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.686862 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-dns-svc\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.687950 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.689547 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " 
pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.692709 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-config\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.693859 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-dns-svc\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.721116 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr5t\" (UniqueName: \"kubernetes.io/projected/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-kube-api-access-plr5t\") pod \"dnsmasq-dns-698758b865-22p46\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.742209 5024 generic.go:334] "Generic (PLEG): container finished" podID="75947d99-a968-43ed-bddc-4742a3628dfa" containerID="30080a6bc05b1671241307e1b8510c4ac42f94d648053af1b7856ca57c0af177" exitCode=0 Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.742575 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" event={"ID":"75947d99-a968-43ed-bddc-4742a3628dfa","Type":"ContainerDied","Data":"30080a6bc05b1671241307e1b8510c4ac42f94d648053af1b7856ca57c0af177"} Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.742672 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" event={"ID":"75947d99-a968-43ed-bddc-4742a3628dfa","Type":"ContainerStarted","Data":"b4759f171155f686e76205ea60a87df664eb55f864334bbe3c93ed766ea2c340"} Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.790499 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.790860 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrjq\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-kube-api-access-dgrjq\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.790912 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-lock\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.791191 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 
17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.791256 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-cache\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.888477 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.896628 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.896688 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrjq\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-kube-api-access-dgrjq\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.896771 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-lock\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.896948 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.896991 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-cache\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.897655 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.900215 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-lock\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.900651 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: E1128 17:21:00.900763 5024 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:21:00 crc kubenswrapper[5024]: E1128 17:21:00.900778 5024 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:21:00 crc kubenswrapper[5024]: E1128 17:21:00.900814 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift podName:aa2554f8-7d4e-425d-a74a-3322dc09d7ed nodeName:}" failed. No retries permitted until 2025-11-28 17:21:01.400800527 +0000 UTC m=+1363.449721432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift") pod "swift-storage-0" (UID: "aa2554f8-7d4e-425d-a74a-3322dc09d7ed") : configmap "swift-ring-files" not found Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.901668 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-cache\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.937078 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrjq\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-kube-api-access-dgrjq\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.945141 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n7llb"] Nov 28 17:21:00 crc kubenswrapper[5024]: I1128 17:21:00.989077 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.214090 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hbk2s"] Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.216672 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.227474 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.228303 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.228462 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.280955 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hbk2s"] Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.311405 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-swiftconf\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.321300 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fc7g\" (UniqueName: \"kubernetes.io/projected/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-kube-api-access-9fc7g\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.321366 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-combined-ca-bundle\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.321452 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-etc-swift\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.321549 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-ring-data-devices\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.321682 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-dispersionconf\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.321753 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-scripts\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 
17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.350134 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-m9j6q"] Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.363977 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.366331 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.381596 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4z798" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.381848 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.382008 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.382254 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.394122 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.424477 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khx8k\" (UniqueName: \"kubernetes.io/projected/4ff0447c-7f25-4d0a-a58b-d5fff6673749-kube-api-access-khx8k\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.426255 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-dispersionconf\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.426379 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.426531 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.426726 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-scripts\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.426868 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ff0447c-7f25-4d0a-a58b-d5fff6673749-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 
17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.427330 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: E1128 17:21:01.427408 5024 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:21:01 crc kubenswrapper[5024]: E1128 17:21:01.427444 5024 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:21:01 crc kubenswrapper[5024]: E1128 17:21:01.427512 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift podName:aa2554f8-7d4e-425d-a74a-3322dc09d7ed nodeName:}" failed. No retries permitted until 2025-11-28 17:21:02.427489179 +0000 UTC m=+1364.476410244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift") pod "swift-storage-0" (UID: "aa2554f8-7d4e-425d-a74a-3322dc09d7ed") : configmap "swift-ring-files" not found Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.427549 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-swiftconf\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.427754 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.427888 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fc7g\" (UniqueName: \"kubernetes.io/projected/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-kube-api-access-9fc7g\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.427976 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-combined-ca-bundle\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.428100 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-etc-swift\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.428280 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
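Note the growing durationBeforeRetry across the two etc-swift failures: 500ms, then 1s, the doubling backoff that nestedpendingoperations.go applies to a failing volume operation. A generic sketch of that retry shape using the apimachinery wait package (step count is an assumption):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay, as in the log
		Factor:   2.0,                    // 500ms -> 1s -> 2s -> ...
		Steps:    5,                      // assumed
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Println("mount attempt", attempt)
		return attempt >= 3, nil // pretend the configmap appears on the third try
	})
	fmt.Println("result:", err)
}
```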
\"kubernetes.io/configmap/4ff0447c-7f25-4d0a-a58b-d5fff6673749-config\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.428406 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-ring-data-devices\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.428517 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff0447c-7f25-4d0a-a58b-d5fff6673749-scripts\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.429312 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-scripts\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.430527 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-etc-swift\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.432254 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-ring-data-devices\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.432935 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-dispersionconf\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.433630 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-swiftconf\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.451007 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fc7g\" (UniqueName: \"kubernetes.io/projected/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-kube-api-access-9fc7g\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.456320 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-combined-ca-bundle\") pod \"swift-ring-rebalance-hbk2s\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " pod="openstack/swift-ring-rebalance-hbk2s" Nov 
28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.531714 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff0447c-7f25-4d0a-a58b-d5fff6673749-scripts\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.531789 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khx8k\" (UniqueName: \"kubernetes.io/projected/4ff0447c-7f25-4d0a-a58b-d5fff6673749-kube-api-access-khx8k\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.531842 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.531895 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ff0447c-7f25-4d0a-a58b-d5fff6673749-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.532056 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.532222 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.532468 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff0447c-7f25-4d0a-a58b-d5fff6673749-config\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.534302 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff0447c-7f25-4d0a-a58b-d5fff6673749-config\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.534860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4ff0447c-7f25-4d0a-a58b-d5fff6673749-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.535294 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ff0447c-7f25-4d0a-a58b-d5fff6673749-scripts\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: 
I1128 17:21:01.538863 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.541626 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.544052 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff0447c-7f25-4d0a-a58b-d5fff6673749-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.560042 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khx8k\" (UniqueName: \"kubernetes.io/projected/4ff0447c-7f25-4d0a-a58b-d5fff6673749-kube-api-access-khx8k\") pod \"ovn-northd-0\" (UID: \"4ff0447c-7f25-4d0a-a58b-d5fff6673749\") " pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.581643 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.628132 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.738365 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-config\") pod \"75947d99-a968-43ed-bddc-4742a3628dfa\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.740115 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs669\" (UniqueName: \"kubernetes.io/projected/75947d99-a968-43ed-bddc-4742a3628dfa-kube-api-access-rs669\") pod \"75947d99-a968-43ed-bddc-4742a3628dfa\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.742372 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-dns-svc\") pod \"75947d99-a968-43ed-bddc-4742a3628dfa\" (UID: \"75947d99-a968-43ed-bddc-4742a3628dfa\") " Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.739475 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.749115 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75947d99-a968-43ed-bddc-4742a3628dfa-kube-api-access-rs669" (OuterVolumeSpecName: "kube-api-access-rs669") pod "75947d99-a968-43ed-bddc-4742a3628dfa" (UID: "75947d99-a968-43ed-bddc-4742a3628dfa"). InnerVolumeSpecName "kube-api-access-rs669". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.779093 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-config" (OuterVolumeSpecName: "config") pod "75947d99-a968-43ed-bddc-4742a3628dfa" (UID: "75947d99-a968-43ed-bddc-4742a3628dfa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.798560 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75947d99-a968-43ed-bddc-4742a3628dfa" (UID: "75947d99-a968-43ed-bddc-4742a3628dfa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.808460 5024 generic.go:334] "Generic (PLEG): container finished" podID="997d6659-56f8-4351-8391-ed9a3b38f63f" containerID="ca7ea6c0efe5aac33ebec12289b2d5f8397dd2a091e45929ca3bfea0a95bd775" exitCode=0 Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.808579 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" event={"ID":"997d6659-56f8-4351-8391-ed9a3b38f63f","Type":"ContainerDied","Data":"ca7ea6c0efe5aac33ebec12289b2d5f8397dd2a091e45929ca3bfea0a95bd775"} Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.808615 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" event={"ID":"997d6659-56f8-4351-8391-ed9a3b38f63f","Type":"ContainerStarted","Data":"2506f6ec4a6c235f153db141b6db65a19820d7f0307ddd31f5d68019d27fbfab"} Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.846924 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.847264 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs669\" (UniqueName: \"kubernetes.io/projected/75947d99-a968-43ed-bddc-4742a3628dfa-kube-api-access-rs669\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.847275 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75947d99-a968-43ed-bddc-4742a3628dfa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.854390 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n7llb" event={"ID":"9a326ee1-ef89-452c-a314-fff7af6fb65f","Type":"ContainerStarted","Data":"e742d917b43313279804329e4d9555fb00e729242870c8360909149e3d181e31"} Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.854472 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n7llb" event={"ID":"9a326ee1-ef89-452c-a314-fff7af6fb65f","Type":"ContainerStarted","Data":"3b2c7632fa38bef528c80d37e5e20170721ad3d799cfd3937db7b6f4cb1a3455"} Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.855669 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-22p46"] Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.884309 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.887167 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-jf7lk" event={"ID":"75947d99-a968-43ed-bddc-4742a3628dfa","Type":"ContainerDied","Data":"b4759f171155f686e76205ea60a87df664eb55f864334bbe3c93ed766ea2c340"} Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.887251 5024 scope.go:117] "RemoveContainer" containerID="30080a6bc05b1671241307e1b8510c4ac42f94d648053af1b7856ca57c0af177" Nov 28 17:21:01 crc kubenswrapper[5024]: I1128 17:21:01.890876 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-n7llb" podStartSLOduration=1.890852394 podStartE2EDuration="1.890852394s" podCreationTimestamp="2025-11-28 17:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:01.888991045 +0000 UTC m=+1363.937911950" watchObservedRunningTime="2025-11-28 17:21:01.890852394 +0000 UTC m=+1363.939773289" Nov 28 17:21:01 crc kubenswrapper[5024]: W1128 17:21:01.937929 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49e8e3e8_5ba4_4a0f_a1df_889e581a1d7b.slice/crio-445331f540351ad5924a2765b9409848cdcc4a7264e656051ddd4839f76101ed WatchSource:0}: Error finding container 445331f540351ad5924a2765b9409848cdcc4a7264e656051ddd4839f76101ed: Status 404 returned error can't find the container with id 445331f540351ad5924a2765b9409848cdcc4a7264e656051ddd4839f76101ed Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.110692 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-jf7lk"] Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.132642 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-jf7lk"] Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.453814 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hbk2s"] Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.469803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:02 crc kubenswrapper[5024]: E1128 17:21:02.470089 5024 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:21:02 crc kubenswrapper[5024]: E1128 17:21:02.470107 5024 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:21:02 crc kubenswrapper[5024]: E1128 17:21:02.470174 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift podName:aa2554f8-7d4e-425d-a74a-3322dc09d7ed nodeName:}" failed. No retries permitted until 2025-11-28 17:21:04.47015113 +0000 UTC m=+1366.519072035 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift") pod "swift-storage-0" (UID: "aa2554f8-7d4e-425d-a74a-3322dc09d7ed") : configmap "swift-ring-files" not found Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.518487 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75947d99-a968-43ed-bddc-4742a3628dfa" path="/var/lib/kubelet/pods/75947d99-a968-43ed-bddc-4742a3628dfa/volumes" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.529391 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.571586 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-ovsdbserver-sb\") pod \"997d6659-56f8-4351-8391-ed9a3b38f63f\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.571871 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfzh2\" (UniqueName: \"kubernetes.io/projected/997d6659-56f8-4351-8391-ed9a3b38f63f-kube-api-access-cfzh2\") pod \"997d6659-56f8-4351-8391-ed9a3b38f63f\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.571913 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-config\") pod \"997d6659-56f8-4351-8391-ed9a3b38f63f\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.571957 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-dns-svc\") pod \"997d6659-56f8-4351-8391-ed9a3b38f63f\" (UID: \"997d6659-56f8-4351-8391-ed9a3b38f63f\") " Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.578282 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997d6659-56f8-4351-8391-ed9a3b38f63f-kube-api-access-cfzh2" (OuterVolumeSpecName: "kube-api-access-cfzh2") pod "997d6659-56f8-4351-8391-ed9a3b38f63f" (UID: "997d6659-56f8-4351-8391-ed9a3b38f63f"). InnerVolumeSpecName "kube-api-access-cfzh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.603749 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "997d6659-56f8-4351-8391-ed9a3b38f63f" (UID: "997d6659-56f8-4351-8391-ed9a3b38f63f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.614567 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "997d6659-56f8-4351-8391-ed9a3b38f63f" (UID: "997d6659-56f8-4351-8391-ed9a3b38f63f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.668126 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-config" (OuterVolumeSpecName: "config") pod "997d6659-56f8-4351-8391-ed9a3b38f63f" (UID: "997d6659-56f8-4351-8391-ed9a3b38f63f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.674713 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfzh2\" (UniqueName: \"kubernetes.io/projected/997d6659-56f8-4351-8391-ed9a3b38f63f-kube-api-access-cfzh2\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.674744 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.674752 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.674762 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/997d6659-56f8-4351-8391-ed9a3b38f63f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.678131 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.901204 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hbk2s" event={"ID":"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd","Type":"ContainerStarted","Data":"f360ebcc730c380c380be00c073ccbe0bd582bda2c94e0197d911ef4d7ea7709"} Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.907163 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.907190 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-m9j6q" event={"ID":"997d6659-56f8-4351-8391-ed9a3b38f63f","Type":"ContainerDied","Data":"2506f6ec4a6c235f153db141b6db65a19820d7f0307ddd31f5d68019d27fbfab"} Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.907265 5024 scope.go:117] "RemoveContainer" containerID="ca7ea6c0efe5aac33ebec12289b2d5f8397dd2a091e45929ca3bfea0a95bd775" Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.910275 5024 generic.go:334] "Generic (PLEG): container finished" podID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerID="9f94e54e9736c86811816900e1f0babcb022ad5a4be373abffea74a8f143c8d5" exitCode=0 Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.910348 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-22p46" event={"ID":"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b","Type":"ContainerDied","Data":"9f94e54e9736c86811816900e1f0babcb022ad5a4be373abffea74a8f143c8d5"} Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.910375 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-22p46" event={"ID":"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b","Type":"ContainerStarted","Data":"445331f540351ad5924a2765b9409848cdcc4a7264e656051ddd4839f76101ed"} Nov 28 17:21:02 crc kubenswrapper[5024]: I1128 17:21:02.921955 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4ff0447c-7f25-4d0a-a58b-d5fff6673749","Type":"ContainerStarted","Data":"824ef1ff916968acc29602f594d26fb496c1e9e5847a01a71525bdb18bc192c8"} Nov 28 17:21:03 crc kubenswrapper[5024]: I1128 17:21:03.235013 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-m9j6q"] Nov 28 17:21:03 crc kubenswrapper[5024]: I1128 17:21:03.245425 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-m9j6q"] Nov 28 17:21:04 crc kubenswrapper[5024]: I1128 17:21:04.069082 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-22p46" event={"ID":"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b","Type":"ContainerStarted","Data":"55b0b60c3bba6dda4c197e053a1481f781982e775595fcdcd13b3cb84da6967a"} Nov 28 17:21:04 crc kubenswrapper[5024]: I1128 17:21:04.069154 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:04 crc kubenswrapper[5024]: I1128 17:21:04.075539 5024 generic.go:334] "Generic (PLEG): container finished" podID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerID="b395afa75b0ad17f7cdd1cbdf43f18a7de598ef4be44dc4db2bef1b45e1a42fc" exitCode=0 Nov 28 17:21:04 crc kubenswrapper[5024]: I1128 17:21:04.075574 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerDied","Data":"b395afa75b0ad17f7cdd1cbdf43f18a7de598ef4be44dc4db2bef1b45e1a42fc"} Nov 28 17:21:04 crc kubenswrapper[5024]: I1128 17:21:04.093066 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-22p46" podStartSLOduration=4.093048941 podStartE2EDuration="4.093048941s" podCreationTimestamp="2025-11-28 17:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 17:21:04.08880941 +0000 UTC m=+1366.137730335" watchObservedRunningTime="2025-11-28 17:21:04.093048941 +0000 UTC m=+1366.141969836" Nov 28 17:21:04 crc kubenswrapper[5024]: I1128 17:21:04.514242 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="997d6659-56f8-4351-8391-ed9a3b38f63f" path="/var/lib/kubelet/pods/997d6659-56f8-4351-8391-ed9a3b38f63f/volumes" Nov 28 17:21:04 crc kubenswrapper[5024]: I1128 17:21:04.550407 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:04 crc kubenswrapper[5024]: E1128 17:21:04.550959 5024 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:21:04 crc kubenswrapper[5024]: E1128 17:21:04.550985 5024 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:21:04 crc kubenswrapper[5024]: E1128 17:21:04.551057 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift podName:aa2554f8-7d4e-425d-a74a-3322dc09d7ed nodeName:}" failed. No retries permitted until 2025-11-28 17:21:08.551037925 +0000 UTC m=+1370.599958830 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift") pod "swift-storage-0" (UID: "aa2554f8-7d4e-425d-a74a-3322dc09d7ed") : configmap "swift-ring-files" not found Nov 28 17:21:05 crc kubenswrapper[5024]: I1128 17:21:05.449213 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 28 17:21:05 crc kubenswrapper[5024]: I1128 17:21:05.449492 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 28 17:21:05 crc kubenswrapper[5024]: I1128 17:21:05.525256 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.196773 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.535580 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-78fdf7cd4f-99mvs" podUID="24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" containerName="console" containerID="cri-o://f01fb5ee8b5feb089b7dd26a3d34261a5c738f1e580b0e144d42a6555eed1493" gracePeriod=15 Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.893853 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.894342 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.966610 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bb6d-account-create-update-4b8k6"] Nov 28 17:21:06 crc kubenswrapper[5024]: E1128 17:21:06.967430 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75947d99-a968-43ed-bddc-4742a3628dfa" containerName="init" Nov 28 
17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.967460 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="75947d99-a968-43ed-bddc-4742a3628dfa" containerName="init" Nov 28 17:21:06 crc kubenswrapper[5024]: E1128 17:21:06.967491 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997d6659-56f8-4351-8391-ed9a3b38f63f" containerName="init" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.967500 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="997d6659-56f8-4351-8391-ed9a3b38f63f" containerName="init" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.967850 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="997d6659-56f8-4351-8391-ed9a3b38f63f" containerName="init" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.967901 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="75947d99-a968-43ed-bddc-4742a3628dfa" containerName="init" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.969243 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.972417 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 28 17:21:06 crc kubenswrapper[5024]: I1128 17:21:06.983829 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb6d-account-create-update-4b8k6"] Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.025996 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8rngl"] Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.027760 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.044375 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8rngl"] Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.064083 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.117205 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78fdf7cd4f-99mvs_24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b/console/0.log" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.117260 5024 generic.go:334] "Generic (PLEG): container finished" podID="24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" containerID="f01fb5ee8b5feb089b7dd26a3d34261a5c738f1e580b0e144d42a6555eed1493" exitCode=2 Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.117359 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fdf7cd4f-99mvs" event={"ID":"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b","Type":"ContainerDied","Data":"f01fb5ee8b5feb089b7dd26a3d34261a5c738f1e580b0e144d42a6555eed1493"} Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.142397 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfmw\" (UniqueName: \"kubernetes.io/projected/37deb816-c36f-47c7-9d3a-c7373eabeb1f-kube-api-access-vvfmw\") pod \"keystone-bb6d-account-create-update-4b8k6\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.142462 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37deb816-c36f-47c7-9d3a-c7373eabeb1f-operator-scripts\") pod \"keystone-bb6d-account-create-update-4b8k6\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.142556 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-operator-scripts\") pod \"keystone-db-create-8rngl\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.142653 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6hmm\" (UniqueName: \"kubernetes.io/projected/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-kube-api-access-f6hmm\") pod \"keystone-db-create-8rngl\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.164901 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4zgdn"] Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.166781 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.182350 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4zgdn"] Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.265487 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-70f1-account-create-update-slf46"] Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.273780 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6hmm\" (UniqueName: \"kubernetes.io/projected/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-kube-api-access-f6hmm\") pod \"keystone-db-create-8rngl\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.274007 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84nmn\" (UniqueName: \"kubernetes.io/projected/9e9c6756-7897-48cb-a004-c8bfe09d4520-kube-api-access-84nmn\") pod \"placement-db-create-4zgdn\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.274344 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvfmw\" (UniqueName: \"kubernetes.io/projected/37deb816-c36f-47c7-9d3a-c7373eabeb1f-kube-api-access-vvfmw\") pod \"keystone-bb6d-account-create-update-4b8k6\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.274423 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37deb816-c36f-47c7-9d3a-c7373eabeb1f-operator-scripts\") pod \"keystone-bb6d-account-create-update-4b8k6\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.274846 5024 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-operator-scripts\") pod \"keystone-db-create-8rngl\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.274891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9c6756-7897-48cb-a004-c8bfe09d4520-operator-scripts\") pod \"placement-db-create-4zgdn\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.279200 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-operator-scripts\") pod \"keystone-db-create-8rngl\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.280541 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37deb816-c36f-47c7-9d3a-c7373eabeb1f-operator-scripts\") pod \"keystone-bb6d-account-create-update-4b8k6\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.297433 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.300485 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.303390 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-70f1-account-create-update-slf46"] Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.304271 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvfmw\" (UniqueName: \"kubernetes.io/projected/37deb816-c36f-47c7-9d3a-c7373eabeb1f-kube-api-access-vvfmw\") pod \"keystone-bb6d-account-create-update-4b8k6\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.312671 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.321099 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6hmm\" (UniqueName: \"kubernetes.io/projected/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-kube-api-access-f6hmm\") pod \"keystone-db-create-8rngl\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.349314 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.389011 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9c6756-7897-48cb-a004-c8bfe09d4520-operator-scripts\") pod \"placement-db-create-4zgdn\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.389223 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84nmn\" (UniqueName: \"kubernetes.io/projected/9e9c6756-7897-48cb-a004-c8bfe09d4520-kube-api-access-84nmn\") pod \"placement-db-create-4zgdn\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.390561 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9c6756-7897-48cb-a004-c8bfe09d4520-operator-scripts\") pod \"placement-db-create-4zgdn\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.419531 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84nmn\" (UniqueName: \"kubernetes.io/projected/9e9c6756-7897-48cb-a004-c8bfe09d4520-kube-api-access-84nmn\") pod \"placement-db-create-4zgdn\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.493540 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j766\" (UniqueName: \"kubernetes.io/projected/ac813be9-87ac-4fc7-b881-542716b8125d-kube-api-access-5j766\") pod \"placement-70f1-account-create-update-slf46\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.493738 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac813be9-87ac-4fc7-b881-542716b8125d-operator-scripts\") pod \"placement-70f1-account-create-update-slf46\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.565337 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.565436 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.568912 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.593461 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.595946 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j766\" (UniqueName: \"kubernetes.io/projected/ac813be9-87ac-4fc7-b881-542716b8125d-kube-api-access-5j766\") pod \"placement-70f1-account-create-update-slf46\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.596092 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac813be9-87ac-4fc7-b881-542716b8125d-operator-scripts\") pod \"placement-70f1-account-create-update-slf46\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.597006 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac813be9-87ac-4fc7-b881-542716b8125d-operator-scripts\") pod \"placement-70f1-account-create-update-slf46\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.617676 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j766\" (UniqueName: \"kubernetes.io/projected/ac813be9-87ac-4fc7-b881-542716b8125d-kube-api-access-5j766\") pod \"placement-70f1-account-create-update-slf46\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:07 crc kubenswrapper[5024]: I1128 17:21:07.719832 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.583076 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:08 crc kubenswrapper[5024]: E1128 17:21:08.587098 5024 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:21:08 crc kubenswrapper[5024]: E1128 17:21:08.603361 5024 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:21:08 crc kubenswrapper[5024]: E1128 17:21:08.603423 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift podName:aa2554f8-7d4e-425d-a74a-3322dc09d7ed nodeName:}" failed. No retries permitted until 2025-11-28 17:21:16.603405476 +0000 UTC m=+1378.652326381 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift") pod "swift-storage-0" (UID: "aa2554f8-7d4e-425d-a74a-3322dc09d7ed") : configmap "swift-ring-files" not found Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.820546 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78fdf7cd4f-99mvs_24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b/console/0.log" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.820623 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.915911 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-service-ca\") pod \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.916846 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-service-ca" (OuterVolumeSpecName: "service-ca") pod "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" (UID: "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.919941 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-config\") pod \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.920142 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-serving-cert\") pod \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.920251 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klbwh\" (UniqueName: \"kubernetes.io/projected/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-kube-api-access-klbwh\") pod \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.920289 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-oauth-config\") pod \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.920349 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-trusted-ca-bundle\") pod \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.920377 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-oauth-serving-cert\") pod 
\"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\" (UID: \"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b\") " Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.920383 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-config" (OuterVolumeSpecName: "console-config") pod "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" (UID: "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.921190 5024 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.921216 5024 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.921598 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" (UID: "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.922618 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" (UID: "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.925232 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" (UID: "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.928286 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" (UID: "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:08 crc kubenswrapper[5024]: I1128 17:21:08.934471 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-kube-api-access-klbwh" (OuterVolumeSpecName: "kube-api-access-klbwh") pod "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" (UID: "24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b"). InnerVolumeSpecName "kube-api-access-klbwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.023133 5024 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.023167 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klbwh\" (UniqueName: \"kubernetes.io/projected/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-kube-api-access-klbwh\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.023179 5024 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.023188 5024 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.023197 5024 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.164337 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78fdf7cd4f-99mvs_24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b/console/0.log" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.164437 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78fdf7cd4f-99mvs" event={"ID":"24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b","Type":"ContainerDied","Data":"d01b9e46d5c7f2172f90a3af5b754bca6c47e153c88d180b5c167d880178a0df"} Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.164483 5024 scope.go:117] "RemoveContainer" containerID="f01fb5ee8b5feb089b7dd26a3d34261a5c738f1e580b0e144d42a6555eed1493" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.164610 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78fdf7cd4f-99mvs" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.260363 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4ff0447c-7f25-4d0a-a58b-d5fff6673749","Type":"ContainerStarted","Data":"da57c95e48c2cf89a06ac3d699de0047d6abab1d5bac1c00a73b2c1a94502a0f"} Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.317110 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-2gsjw"] Nov 28 17:21:09 crc kubenswrapper[5024]: E1128 17:21:09.317732 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" containerName="console" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.317754 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" containerName="console" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.317981 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" containerName="console" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.318776 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.334293 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hbk2s" event={"ID":"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd","Type":"ContainerStarted","Data":"0e7f91149c3c427e2aed74cc4ee22ec87df9d9b073bcb55e83b7edac47adb2be"} Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.355843 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2294c836-32c8-47eb-b5de-563fca6deda8-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-2gsjw\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.356284 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvz8\" (UniqueName: \"kubernetes.io/projected/2294c836-32c8-47eb-b5de-563fca6deda8-kube-api-access-rqvz8\") pod \"mysqld-exporter-openstack-db-create-2gsjw\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: E1128 17:21:09.431468 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c3d9a5_75c8_4dda_a1b5_51f92ac9d59b.slice\": RecentStats: unable to find data in memory cache]" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.460448 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2294c836-32c8-47eb-b5de-563fca6deda8-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-2gsjw\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.460878 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqvz8\" (UniqueName: \"kubernetes.io/projected/2294c836-32c8-47eb-b5de-563fca6deda8-kube-api-access-rqvz8\") pod \"mysqld-exporter-openstack-db-create-2gsjw\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.482738 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2294c836-32c8-47eb-b5de-563fca6deda8-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-2gsjw\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.484432 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqvz8\" (UniqueName: \"kubernetes.io/projected/2294c836-32c8-47eb-b5de-563fca6deda8-kube-api-access-rqvz8\") pod \"mysqld-exporter-openstack-db-create-2gsjw\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.495574 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-70f1-account-create-update-slf46"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.530479 5024 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-2gsjw"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.544455 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78fdf7cd4f-99mvs"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.554908 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.559570 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4zgdn"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.573147 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-78fdf7cd4f-99mvs"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.585377 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8rngl"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.613469 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hbk2s" podStartSLOduration=2.790839109 podStartE2EDuration="8.613444599s" podCreationTimestamp="2025-11-28 17:21:01 +0000 UTC" firstStartedPulling="2025-11-28 17:21:02.453842221 +0000 UTC m=+1364.502763116" lastFinishedPulling="2025-11-28 17:21:08.276447701 +0000 UTC m=+1370.325368606" observedRunningTime="2025-11-28 17:21:09.383045865 +0000 UTC m=+1371.431966770" watchObservedRunningTime="2025-11-28 17:21:09.613444599 +0000 UTC m=+1371.662365504" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.631126 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb6d-account-create-update-4b8k6"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.643792 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-6839-account-create-update-qx8bd"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.645625 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.648176 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.654108 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6839-account-create-update-qx8bd"] Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.774888 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvmf\" (UniqueName: \"kubernetes.io/projected/1e284ba5-1197-4d62-8671-b092ab8c8fa7-kube-api-access-nmvmf\") pod \"mysqld-exporter-6839-account-create-update-qx8bd\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.775279 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e284ba5-1197-4d62-8671-b092ab8c8fa7-operator-scripts\") pod \"mysqld-exporter-6839-account-create-update-qx8bd\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.886564 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e284ba5-1197-4d62-8671-b092ab8c8fa7-operator-scripts\") pod \"mysqld-exporter-6839-account-create-update-qx8bd\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.887398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e284ba5-1197-4d62-8671-b092ab8c8fa7-operator-scripts\") pod \"mysqld-exporter-6839-account-create-update-qx8bd\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.887414 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmvmf\" (UniqueName: \"kubernetes.io/projected/1e284ba5-1197-4d62-8671-b092ab8c8fa7-kube-api-access-nmvmf\") pod \"mysqld-exporter-6839-account-create-update-qx8bd\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.927940 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmvmf\" (UniqueName: \"kubernetes.io/projected/1e284ba5-1197-4d62-8671-b092ab8c8fa7-kube-api-access-nmvmf\") pod \"mysqld-exporter-6839-account-create-update-qx8bd\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:09 crc kubenswrapper[5024]: I1128 17:21:09.968776 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.143574 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-2gsjw"] Nov 28 17:21:10 crc kubenswrapper[5024]: W1128 17:21:10.183269 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2294c836_32c8_47eb_b5de_563fca6deda8.slice/crio-1b2c920781c6ad7684dba991ad9aef41df94e12ff784967c02a4a6f344223d90 WatchSource:0}: Error finding container 1b2c920781c6ad7684dba991ad9aef41df94e12ff784967c02a4a6f344223d90: Status 404 returned error can't find the container with id 1b2c920781c6ad7684dba991ad9aef41df94e12ff784967c02a4a6f344223d90 Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.351528 5024 generic.go:334] "Generic (PLEG): container finished" podID="37deb816-c36f-47c7-9d3a-c7373eabeb1f" containerID="f850c24e076d8610ed38159cf3435df6e50c6eb16ff897544655c63c11b33c0a" exitCode=0 Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.351681 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb6d-account-create-update-4b8k6" event={"ID":"37deb816-c36f-47c7-9d3a-c7373eabeb1f","Type":"ContainerDied","Data":"f850c24e076d8610ed38159cf3435df6e50c6eb16ff897544655c63c11b33c0a"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.352072 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb6d-account-create-update-4b8k6" event={"ID":"37deb816-c36f-47c7-9d3a-c7373eabeb1f","Type":"ContainerStarted","Data":"d6ecc8ffa50ecbf0389d05890ddaf7f1e7a8b94268d076586a14044878050cf7"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.355369 5024 generic.go:334] "Generic (PLEG): container finished" podID="9e9c6756-7897-48cb-a004-c8bfe09d4520" containerID="46e15d1669f9a19e098a4ea14066a2b4d2ecea5c13070966d30be8c5b603da65" exitCode=0 Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.355437 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4zgdn" event={"ID":"9e9c6756-7897-48cb-a004-c8bfe09d4520","Type":"ContainerDied","Data":"46e15d1669f9a19e098a4ea14066a2b4d2ecea5c13070966d30be8c5b603da65"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.355502 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4zgdn" event={"ID":"9e9c6756-7897-48cb-a004-c8bfe09d4520","Type":"ContainerStarted","Data":"7f5c6e37fa1b67871754bbd91856788a9aa771bc6ccee730a5040395ea042ef4"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.357741 5024 generic.go:334] "Generic (PLEG): container finished" podID="ac813be9-87ac-4fc7-b881-542716b8125d" containerID="2ca0bf6208c29d32dad01e286df5ecfc93bf7cb476a8b650a275be5e21ba6a80" exitCode=0 Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.357850 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70f1-account-create-update-slf46" event={"ID":"ac813be9-87ac-4fc7-b881-542716b8125d","Type":"ContainerDied","Data":"2ca0bf6208c29d32dad01e286df5ecfc93bf7cb476a8b650a275be5e21ba6a80"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.357899 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70f1-account-create-update-slf46" event={"ID":"ac813be9-87ac-4fc7-b881-542716b8125d","Type":"ContainerStarted","Data":"232340a43097df0abe3ecdb1712c112eb20bfce78221e644784c052760186a70"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 
17:21:10.372542 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" event={"ID":"2294c836-32c8-47eb-b5de-563fca6deda8","Type":"ContainerStarted","Data":"1b2c920781c6ad7684dba991ad9aef41df94e12ff784967c02a4a6f344223d90"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.398649 5024 generic.go:334] "Generic (PLEG): container finished" podID="d5b69e2a-d3f0-49f6-badd-92d6a30ba281" containerID="64f48d753e331b8cffbccf3a0347aa314e57da4224d458651b2a3bc338fe147d" exitCode=0 Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.398792 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8rngl" event={"ID":"d5b69e2a-d3f0-49f6-badd-92d6a30ba281","Type":"ContainerDied","Data":"64f48d753e331b8cffbccf3a0347aa314e57da4224d458651b2a3bc338fe147d"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.398826 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8rngl" event={"ID":"d5b69e2a-d3f0-49f6-badd-92d6a30ba281","Type":"ContainerStarted","Data":"e103f8b4c4675fa570d5b96b217fc4c73b2746714eb7fed1f50d3d74df1dffef"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.423381 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4ff0447c-7f25-4d0a-a58b-d5fff6673749","Type":"ContainerStarted","Data":"042e29d2d13a524744627e8b110983ff756ba0d1464101321d53a29aaad3205b"} Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.499293 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.921373371 podStartE2EDuration="9.499263651s" podCreationTimestamp="2025-11-28 17:21:01 +0000 UTC" firstStartedPulling="2025-11-28 17:21:02.677284181 +0000 UTC m=+1364.726205086" lastFinishedPulling="2025-11-28 17:21:08.255174451 +0000 UTC m=+1370.304095366" observedRunningTime="2025-11-28 17:21:10.476892183 +0000 UTC m=+1372.525813088" watchObservedRunningTime="2025-11-28 17:21:10.499263651 +0000 UTC m=+1372.548184556" Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.663425 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b" path="/var/lib/kubelet/pods/24c3d9a5-75c8-4dda-a1b5-51f92ac9d59b/volumes" Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.753127 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6839-account-create-update-qx8bd"] Nov 28 17:21:10 crc kubenswrapper[5024]: I1128 17:21:10.899195 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.024611 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bg67g"] Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.025168 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" podUID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerName="dnsmasq-dns" containerID="cri-o://8565e775bfbde208e7f91d72747da59fe89a02a78eaaad0fa6a2248e38157fed" gracePeriod=10 Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.441417 5024 generic.go:334] "Generic (PLEG): container finished" podID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerID="c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4" exitCode=0 Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.441496 5024 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77c4107c-2b4b-46f2-bf47-ccf384504fb1","Type":"ContainerDied","Data":"c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4"} Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.454217 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" event={"ID":"1e284ba5-1197-4d62-8671-b092ab8c8fa7","Type":"ContainerStarted","Data":"13afd1e3647203038a7464ee5221ccfbbdd7be4a735ffd09ec1ac782000a2473"} Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.454601 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" event={"ID":"1e284ba5-1197-4d62-8671-b092ab8c8fa7","Type":"ContainerStarted","Data":"b24d9d710295595996382ee965b2945a9932dc7d22b576a76ce3b4ed6ec16525"} Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.457827 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" event={"ID":"2294c836-32c8-47eb-b5de-563fca6deda8","Type":"ContainerStarted","Data":"88b9b26a666698ecc9f86da90a1380bdc927d2dd0d6ece467a9f1f7f1c3719f6"} Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.485152 5024 generic.go:334] "Generic (PLEG): container finished" podID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerID="8565e775bfbde208e7f91d72747da59fe89a02a78eaaad0fa6a2248e38157fed" exitCode=0 Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.485279 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" event={"ID":"c9dff956-8c29-446a-b6a9-f64ec4ea58b2","Type":"ContainerDied","Data":"8565e775bfbde208e7f91d72747da59fe89a02a78eaaad0fa6a2248e38157fed"} Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.488914 5024 generic.go:334] "Generic (PLEG): container finished" podID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerID="2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11" exitCode=0 Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.489123 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a996fd8-35ac-41d9-a490-71dc31fa0686","Type":"ContainerDied","Data":"2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11"} Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.489851 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.528291 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" podStartSLOduration=2.528271563 podStartE2EDuration="2.528271563s" podCreationTimestamp="2025-11-28 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:11.520804267 +0000 UTC m=+1373.569725172" watchObservedRunningTime="2025-11-28 17:21:11.528271563 +0000 UTC m=+1373.577192468" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.580143 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" podStartSLOduration=2.580116438 podStartE2EDuration="2.580116438s" podCreationTimestamp="2025-11-28 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:11.541685966 
+0000 UTC m=+1373.590606871" watchObservedRunningTime="2025-11-28 17:21:11.580116438 +0000 UTC m=+1373.629037343" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.833261 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.872551 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25v79\" (UniqueName: \"kubernetes.io/projected/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-kube-api-access-25v79\") pod \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.872634 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-config\") pod \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.872690 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-dns-svc\") pod \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\" (UID: \"c9dff956-8c29-446a-b6a9-f64ec4ea58b2\") " Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.884239 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-kube-api-access-25v79" (OuterVolumeSpecName: "kube-api-access-25v79") pod "c9dff956-8c29-446a-b6a9-f64ec4ea58b2" (UID: "c9dff956-8c29-446a-b6a9-f64ec4ea58b2"). InnerVolumeSpecName "kube-api-access-25v79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.955175 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9dff956-8c29-446a-b6a9-f64ec4ea58b2" (UID: "c9dff956-8c29-446a-b6a9-f64ec4ea58b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.967800 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-config" (OuterVolumeSpecName: "config") pod "c9dff956-8c29-446a-b6a9-f64ec4ea58b2" (UID: "c9dff956-8c29-446a-b6a9-f64ec4ea58b2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.983236 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25v79\" (UniqueName: \"kubernetes.io/projected/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-kube-api-access-25v79\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.983277 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:11 crc kubenswrapper[5024]: I1128 17:21:11.983289 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dff956-8c29-446a-b6a9-f64ec4ea58b2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.187884 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.243328 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.292963 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvfmw\" (UniqueName: \"kubernetes.io/projected/37deb816-c36f-47c7-9d3a-c7373eabeb1f-kube-api-access-vvfmw\") pod \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.293080 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac813be9-87ac-4fc7-b881-542716b8125d-operator-scripts\") pod \"ac813be9-87ac-4fc7-b881-542716b8125d\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.293238 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j766\" (UniqueName: \"kubernetes.io/projected/ac813be9-87ac-4fc7-b881-542716b8125d-kube-api-access-5j766\") pod \"ac813be9-87ac-4fc7-b881-542716b8125d\" (UID: \"ac813be9-87ac-4fc7-b881-542716b8125d\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.293334 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37deb816-c36f-47c7-9d3a-c7373eabeb1f-operator-scripts\") pod \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\" (UID: \"37deb816-c36f-47c7-9d3a-c7373eabeb1f\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.294319 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37deb816-c36f-47c7-9d3a-c7373eabeb1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37deb816-c36f-47c7-9d3a-c7373eabeb1f" (UID: "37deb816-c36f-47c7-9d3a-c7373eabeb1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.294326 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac813be9-87ac-4fc7-b881-542716b8125d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac813be9-87ac-4fc7-b881-542716b8125d" (UID: "ac813be9-87ac-4fc7-b881-542716b8125d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.303197 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37deb816-c36f-47c7-9d3a-c7373eabeb1f-kube-api-access-vvfmw" (OuterVolumeSpecName: "kube-api-access-vvfmw") pod "37deb816-c36f-47c7-9d3a-c7373eabeb1f" (UID: "37deb816-c36f-47c7-9d3a-c7373eabeb1f"). InnerVolumeSpecName "kube-api-access-vvfmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.303280 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac813be9-87ac-4fc7-b881-542716b8125d-kube-api-access-5j766" (OuterVolumeSpecName: "kube-api-access-5j766") pod "ac813be9-87ac-4fc7-b881-542716b8125d" (UID: "ac813be9-87ac-4fc7-b881-542716b8125d"). InnerVolumeSpecName "kube-api-access-5j766". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.361119 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.395574 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6hmm\" (UniqueName: \"kubernetes.io/projected/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-kube-api-access-f6hmm\") pod \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.395645 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-operator-scripts\") pod \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\" (UID: \"d5b69e2a-d3f0-49f6-badd-92d6a30ba281\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.396202 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.397104 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvfmw\" (UniqueName: \"kubernetes.io/projected/37deb816-c36f-47c7-9d3a-c7373eabeb1f-kube-api-access-vvfmw\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.397121 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac813be9-87ac-4fc7-b881-542716b8125d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.397130 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j766\" (UniqueName: \"kubernetes.io/projected/ac813be9-87ac-4fc7-b881-542716b8125d-kube-api-access-5j766\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.397140 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37deb816-c36f-47c7-9d3a-c7373eabeb1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.397497 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5b69e2a-d3f0-49f6-badd-92d6a30ba281" (UID: "d5b69e2a-d3f0-49f6-badd-92d6a30ba281"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.399655 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-kube-api-access-f6hmm" (OuterVolumeSpecName: "kube-api-access-f6hmm") pod "d5b69e2a-d3f0-49f6-badd-92d6a30ba281" (UID: "d5b69e2a-d3f0-49f6-badd-92d6a30ba281"). InnerVolumeSpecName "kube-api-access-f6hmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.514962 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6hmm\" (UniqueName: \"kubernetes.io/projected/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-kube-api-access-f6hmm\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.515260 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b69e2a-d3f0-49f6-badd-92d6a30ba281-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.552255 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.560842 5024 generic.go:334] "Generic (PLEG): container finished" podID="2294c836-32c8-47eb-b5de-563fca6deda8" containerID="88b9b26a666698ecc9f86da90a1380bdc927d2dd0d6ece467a9f1f7f1c3719f6" exitCode=0 Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.567726 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.576442 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4zgdn" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.580447 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-70f1-account-create-update-slf46" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.610625 5024 generic.go:334] "Generic (PLEG): container finished" podID="1e284ba5-1197-4d62-8671-b092ab8c8fa7" containerID="13afd1e3647203038a7464ee5221ccfbbdd7be4a735ffd09ec1ac782000a2473" exitCode=0 Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.610905 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a996fd8-35ac-41d9-a490-71dc31fa0686","Type":"ContainerStarted","Data":"1c04bd302d66be42cdcba39a29ea4cd5ba7672183ac7b7d67961cbbd0d65032b"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611154 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb6d-account-create-update-4b8k6" event={"ID":"37deb816-c36f-47c7-9d3a-c7373eabeb1f","Type":"ContainerDied","Data":"d6ecc8ffa50ecbf0389d05890ddaf7f1e7a8b94268d076586a14044878050cf7"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611233 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ecc8ffa50ecbf0389d05890ddaf7f1e7a8b94268d076586a14044878050cf7" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611295 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" event={"ID":"2294c836-32c8-47eb-b5de-563fca6deda8","Type":"ContainerDied","Data":"88b9b26a666698ecc9f86da90a1380bdc927d2dd0d6ece467a9f1f7f1c3719f6"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611391 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8rngl" event={"ID":"d5b69e2a-d3f0-49f6-badd-92d6a30ba281","Type":"ContainerDied","Data":"e103f8b4c4675fa570d5b96b217fc4c73b2746714eb7fed1f50d3d74df1dffef"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611463 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e103f8b4c4675fa570d5b96b217fc4c73b2746714eb7fed1f50d3d74df1dffef" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611543 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4zgdn" event={"ID":"9e9c6756-7897-48cb-a004-c8bfe09d4520","Type":"ContainerDied","Data":"7f5c6e37fa1b67871754bbd91856788a9aa771bc6ccee730a5040395ea042ef4"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611622 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5c6e37fa1b67871754bbd91856788a9aa771bc6ccee730a5040395ea042ef4" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611682 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70f1-account-create-update-slf46" event={"ID":"ac813be9-87ac-4fc7-b881-542716b8125d","Type":"ContainerDied","Data":"232340a43097df0abe3ecdb1712c112eb20bfce78221e644784c052760186a70"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611750 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="232340a43097df0abe3ecdb1712c112eb20bfce78221e644784c052760186a70" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611807 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77c4107c-2b4b-46f2-bf47-ccf384504fb1","Type":"ContainerStarted","Data":"9021ac8633b92acd690b0c8d7fd0ed0c5282b11539876fd1592284fbf1565145"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.611875 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" event={"ID":"1e284ba5-1197-4d62-8671-b092ab8c8fa7","Type":"ContainerDied","Data":"13afd1e3647203038a7464ee5221ccfbbdd7be4a735ffd09ec1ac782000a2473"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.617103 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.617962 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.619122 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84nmn\" (UniqueName: \"kubernetes.io/projected/9e9c6756-7897-48cb-a004-c8bfe09d4520-kube-api-access-84nmn\") pod \"9e9c6756-7897-48cb-a004-c8bfe09d4520\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.619169 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9c6756-7897-48cb-a004-c8bfe09d4520-operator-scripts\") pod \"9e9c6756-7897-48cb-a004-c8bfe09d4520\" (UID: \"9e9c6756-7897-48cb-a004-c8bfe09d4520\") " Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.620642 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9c6756-7897-48cb-a004-c8bfe09d4520-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e9c6756-7897-48cb-a004-c8bfe09d4520" (UID: "9e9c6756-7897-48cb-a004-c8bfe09d4520"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.635797 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9c6756-7897-48cb-a004-c8bfe09d4520-kube-api-access-84nmn" (OuterVolumeSpecName: "kube-api-access-84nmn") pod "9e9c6756-7897-48cb-a004-c8bfe09d4520" (UID: "9e9c6756-7897-48cb-a004-c8bfe09d4520"). InnerVolumeSpecName "kube-api-access-84nmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.639177 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.640890 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bg67g" event={"ID":"c9dff956-8c29-446a-b6a9-f64ec4ea58b2","Type":"ContainerDied","Data":"9db438f6bbecabe1dcf8aae6c6d61b2717866d9fe8599d47103d5ea05fcaf8fc"} Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.640948 5024 scope.go:117] "RemoveContainer" containerID="8565e775bfbde208e7f91d72747da59fe89a02a78eaaad0fa6a2248e38157fed" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.673459 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gkzs2"] Nov 28 17:21:12 crc kubenswrapper[5024]: E1128 17:21:12.673908 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9c6756-7897-48cb-a004-c8bfe09d4520" containerName="mariadb-database-create" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.673928 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9c6756-7897-48cb-a004-c8bfe09d4520" containerName="mariadb-database-create" Nov 28 17:21:12 crc kubenswrapper[5024]: E1128 17:21:12.673942 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerName="dnsmasq-dns" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.673950 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerName="dnsmasq-dns" Nov 28 17:21:12 crc kubenswrapper[5024]: E1128 17:21:12.673967 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b69e2a-d3f0-49f6-badd-92d6a30ba281" containerName="mariadb-database-create" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.673973 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b69e2a-d3f0-49f6-badd-92d6a30ba281" containerName="mariadb-database-create" Nov 28 17:21:12 crc kubenswrapper[5024]: E1128 17:21:12.673999 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac813be9-87ac-4fc7-b881-542716b8125d" containerName="mariadb-account-create-update" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674005 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac813be9-87ac-4fc7-b881-542716b8125d" containerName="mariadb-account-create-update" Nov 28 17:21:12 crc kubenswrapper[5024]: E1128 17:21:12.674033 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerName="init" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674039 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerName="init" Nov 28 17:21:12 crc kubenswrapper[5024]: E1128 17:21:12.674054 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37deb816-c36f-47c7-9d3a-c7373eabeb1f" containerName="mariadb-account-create-update" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674060 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="37deb816-c36f-47c7-9d3a-c7373eabeb1f" containerName="mariadb-account-create-update" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674264 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" containerName="dnsmasq-dns" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674277 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b69e2a-d3f0-49f6-badd-92d6a30ba281" 
containerName="mariadb-database-create" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674289 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="37deb816-c36f-47c7-9d3a-c7373eabeb1f" containerName="mariadb-account-create-update" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674303 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9c6756-7897-48cb-a004-c8bfe09d4520" containerName="mariadb-database-create" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.674318 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac813be9-87ac-4fc7-b881-542716b8125d" containerName="mariadb-account-create-update" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.675144 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.688783 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.793022409 podStartE2EDuration="1m0.688759236s" podCreationTimestamp="2025-11-28 17:20:12 +0000 UTC" firstStartedPulling="2025-11-28 17:20:14.656255741 +0000 UTC m=+1316.705176646" lastFinishedPulling="2025-11-28 17:20:37.551992568 +0000 UTC m=+1339.600913473" observedRunningTime="2025-11-28 17:21:12.665431242 +0000 UTC m=+1374.714352147" watchObservedRunningTime="2025-11-28 17:21:12.688759236 +0000 UTC m=+1374.737680131" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.694068 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3307-account-create-update-s4hhz"] Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.696949 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.704510 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.708311 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3307-account-create-update-s4hhz"] Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.721128 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84nmn\" (UniqueName: \"kubernetes.io/projected/9e9c6756-7897-48cb-a004-c8bfe09d4520-kube-api-access-84nmn\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.721173 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9c6756-7897-48cb-a004-c8bfe09d4520-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.731186 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gkzs2"] Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.738450 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.534420053 podStartE2EDuration="1m0.738430013s" podCreationTimestamp="2025-11-28 17:20:12 +0000 UTC" firstStartedPulling="2025-11-28 17:20:14.291988854 +0000 UTC m=+1316.340909759" lastFinishedPulling="2025-11-28 17:20:37.495998814 +0000 UTC m=+1339.544919719" observedRunningTime="2025-11-28 17:21:12.694738053 +0000 UTC m=+1374.743658958" watchObservedRunningTime="2025-11-28 17:21:12.738430013 +0000 UTC m=+1374.787350928" Nov 28 17:21:12 crc 
kubenswrapper[5024]: I1128 17:21:12.824047 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730c1e44-786f-4f58-b6fd-bbc27112ed73-operator-scripts\") pod \"glance-db-create-gkzs2\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.824183 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c41427-dbc5-4f74-a83d-021976f51327-operator-scripts\") pod \"glance-3307-account-create-update-s4hhz\" (UID: \"e8c41427-dbc5-4f74-a83d-021976f51327\") " pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.824326 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5nxx\" (UniqueName: \"kubernetes.io/projected/e8c41427-dbc5-4f74-a83d-021976f51327-kube-api-access-k5nxx\") pod \"glance-3307-account-create-update-s4hhz\" (UID: \"e8c41427-dbc5-4f74-a83d-021976f51327\") " pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.825274 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58q75\" (UniqueName: \"kubernetes.io/projected/730c1e44-786f-4f58-b6fd-bbc27112ed73-kube-api-access-58q75\") pod \"glance-db-create-gkzs2\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.928207 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5nxx\" (UniqueName: \"kubernetes.io/projected/e8c41427-dbc5-4f74-a83d-021976f51327-kube-api-access-k5nxx\") pod \"glance-3307-account-create-update-s4hhz\" (UID: \"e8c41427-dbc5-4f74-a83d-021976f51327\") " pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.929341 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58q75\" (UniqueName: \"kubernetes.io/projected/730c1e44-786f-4f58-b6fd-bbc27112ed73-kube-api-access-58q75\") pod \"glance-db-create-gkzs2\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.929508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730c1e44-786f-4f58-b6fd-bbc27112ed73-operator-scripts\") pod \"glance-db-create-gkzs2\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.930934 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730c1e44-786f-4f58-b6fd-bbc27112ed73-operator-scripts\") pod \"glance-db-create-gkzs2\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.931270 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c41427-dbc5-4f74-a83d-021976f51327-operator-scripts\") pod \"glance-3307-account-create-update-s4hhz\" (UID:
\"e8c41427-dbc5-4f74-a83d-021976f51327\") " pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.932270 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c41427-dbc5-4f74-a83d-021976f51327-operator-scripts\") pod \"glance-3307-account-create-update-s4hhz\" (UID: \"e8c41427-dbc5-4f74-a83d-021976f51327\") " pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.951143 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58q75\" (UniqueName: \"kubernetes.io/projected/730c1e44-786f-4f58-b6fd-bbc27112ed73-kube-api-access-58q75\") pod \"glance-db-create-gkzs2\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.952536 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5nxx\" (UniqueName: \"kubernetes.io/projected/e8c41427-dbc5-4f74-a83d-021976f51327-kube-api-access-k5nxx\") pod \"glance-3307-account-create-update-s4hhz\" (UID: \"e8c41427-dbc5-4f74-a83d-021976f51327\") " pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:12 crc kubenswrapper[5024]: I1128 17:21:12.967441 5024 scope.go:117] "RemoveContainer" containerID="c04b8e3752a2dcdf3f616dc2040e1ca59520d2b94cceac4080c744d5a8dbfef1" Nov 28 17:21:13 crc kubenswrapper[5024]: I1128 17:21:13.122425 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:13 crc kubenswrapper[5024]: I1128 17:21:13.125122 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:13 crc kubenswrapper[5024]: I1128 17:21:13.144322 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bg67g"] Nov 28 17:21:13 crc kubenswrapper[5024]: I1128 17:21:13.157506 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bg67g"] Nov 28 17:21:13 crc kubenswrapper[5024]: I1128 17:21:13.813481 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gkzs2"] Nov 28 17:21:13 crc kubenswrapper[5024]: I1128 17:21:13.914729 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3307-account-create-update-s4hhz"] Nov 28 17:21:13 crc kubenswrapper[5024]: W1128 17:21:13.923880 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8c41427_dbc5_4f74_a83d_021976f51327.slice/crio-53e5afc4c4a79f6f6a459d410213074217ec24eb7a15e57a9945403836898336 WatchSource:0}: Error finding container 53e5afc4c4a79f6f6a459d410213074217ec24eb7a15e57a9945403836898336: Status 404 returned error can't find the container with id 53e5afc4c4a79f6f6a459d410213074217ec24eb7a15e57a9945403836898336 Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.241077 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.282235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmvmf\" (UniqueName: \"kubernetes.io/projected/1e284ba5-1197-4d62-8671-b092ab8c8fa7-kube-api-access-nmvmf\") pod \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.282382 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e284ba5-1197-4d62-8671-b092ab8c8fa7-operator-scripts\") pod \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\" (UID: \"1e284ba5-1197-4d62-8671-b092ab8c8fa7\") " Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.283942 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e284ba5-1197-4d62-8671-b092ab8c8fa7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e284ba5-1197-4d62-8671-b092ab8c8fa7" (UID: "1e284ba5-1197-4d62-8671-b092ab8c8fa7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.290827 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e284ba5-1197-4d62-8671-b092ab8c8fa7-kube-api-access-nmvmf" (OuterVolumeSpecName: "kube-api-access-nmvmf") pod "1e284ba5-1197-4d62-8671-b092ab8c8fa7" (UID: "1e284ba5-1197-4d62-8671-b092ab8c8fa7"). InnerVolumeSpecName "kube-api-access-nmvmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.311498 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.415184 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqvz8\" (UniqueName: \"kubernetes.io/projected/2294c836-32c8-47eb-b5de-563fca6deda8-kube-api-access-rqvz8\") pod \"2294c836-32c8-47eb-b5de-563fca6deda8\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.415382 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2294c836-32c8-47eb-b5de-563fca6deda8-operator-scripts\") pod \"2294c836-32c8-47eb-b5de-563fca6deda8\" (UID: \"2294c836-32c8-47eb-b5de-563fca6deda8\") " Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.416003 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e284ba5-1197-4d62-8671-b092ab8c8fa7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.416042 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmvmf\" (UniqueName: \"kubernetes.io/projected/1e284ba5-1197-4d62-8671-b092ab8c8fa7-kube-api-access-nmvmf\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.416500 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2294c836-32c8-47eb-b5de-563fca6deda8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2294c836-32c8-47eb-b5de-563fca6deda8" (UID: "2294c836-32c8-47eb-b5de-563fca6deda8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.460242 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2294c836-32c8-47eb-b5de-563fca6deda8-kube-api-access-rqvz8" (OuterVolumeSpecName: "kube-api-access-rqvz8") pod "2294c836-32c8-47eb-b5de-563fca6deda8" (UID: "2294c836-32c8-47eb-b5de-563fca6deda8"). InnerVolumeSpecName "kube-api-access-rqvz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.518159 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2294c836-32c8-47eb-b5de-563fca6deda8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.518650 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqvz8\" (UniqueName: \"kubernetes.io/projected/2294c836-32c8-47eb-b5de-563fca6deda8-kube-api-access-rqvz8\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.524773 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9dff956-8c29-446a-b6a9-f64ec4ea58b2" path="/var/lib/kubelet/pods/c9dff956-8c29-446a-b6a9-f64ec4ea58b2/volumes" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.692858 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3307-account-create-update-s4hhz" event={"ID":"e8c41427-dbc5-4f74-a83d-021976f51327","Type":"ContainerStarted","Data":"1113b52b607e7fe2e78906bb79b9220a97bceb97968caffcdbc3bda892c56303"} Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.692913 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3307-account-create-update-s4hhz" event={"ID":"e8c41427-dbc5-4f74-a83d-021976f51327","Type":"ContainerStarted","Data":"53e5afc4c4a79f6f6a459d410213074217ec24eb7a15e57a9945403836898336"} Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.695248 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gkzs2" event={"ID":"730c1e44-786f-4f58-b6fd-bbc27112ed73","Type":"ContainerStarted","Data":"eb58e6e86e5c9bf1ccacf44ebc52dedba9f91145b4496dcdeaf6d21db6861ab9"} Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.695285 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gkzs2" event={"ID":"730c1e44-786f-4f58-b6fd-bbc27112ed73","Type":"ContainerStarted","Data":"201a4c50fb19c7f7e2812874cabe8d62f7199bbf6039b0d42d5fe86d5835f69c"} Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.697763 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" event={"ID":"1e284ba5-1197-4d62-8671-b092ab8c8fa7","Type":"ContainerDied","Data":"b24d9d710295595996382ee965b2945a9932dc7d22b576a76ce3b4ed6ec16525"} Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.697791 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b24d9d710295595996382ee965b2945a9932dc7d22b576a76ce3b4ed6ec16525" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.697835 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6839-account-create-update-qx8bd" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.699971 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" event={"ID":"2294c836-32c8-47eb-b5de-563fca6deda8","Type":"ContainerDied","Data":"1b2c920781c6ad7684dba991ad9aef41df94e12ff784967c02a4a6f344223d90"} Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.699995 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b2c920781c6ad7684dba991ad9aef41df94e12ff784967c02a4a6f344223d90" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.700056 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-2gsjw" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.722871 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3307-account-create-update-s4hhz" podStartSLOduration=2.722846049 podStartE2EDuration="2.722846049s" podCreationTimestamp="2025-11-28 17:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:14.71223668 +0000 UTC m=+1376.761157585" watchObservedRunningTime="2025-11-28 17:21:14.722846049 +0000 UTC m=+1376.771766954" Nov 28 17:21:14 crc kubenswrapper[5024]: I1128 17:21:14.750106 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-gkzs2" podStartSLOduration=2.750084476 podStartE2EDuration="2.750084476s" podCreationTimestamp="2025-11-28 17:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:14.74113645 +0000 UTC m=+1376.790057355" watchObservedRunningTime="2025-11-28 17:21:14.750084476 +0000 UTC m=+1376.799005381" Nov 28 17:21:15 crc kubenswrapper[5024]: I1128 17:21:15.714453 5024 generic.go:334] "Generic (PLEG): container finished" podID="e8c41427-dbc5-4f74-a83d-021976f51327" containerID="1113b52b607e7fe2e78906bb79b9220a97bceb97968caffcdbc3bda892c56303" exitCode=0 Nov 28 17:21:15 crc kubenswrapper[5024]: I1128 17:21:15.714834 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3307-account-create-update-s4hhz" event={"ID":"e8c41427-dbc5-4f74-a83d-021976f51327","Type":"ContainerDied","Data":"1113b52b607e7fe2e78906bb79b9220a97bceb97968caffcdbc3bda892c56303"} Nov 28 17:21:15 crc kubenswrapper[5024]: I1128 17:21:15.718468 5024 generic.go:334] "Generic (PLEG): container finished" podID="730c1e44-786f-4f58-b6fd-bbc27112ed73" containerID="eb58e6e86e5c9bf1ccacf44ebc52dedba9f91145b4496dcdeaf6d21db6861ab9" exitCode=0 Nov 28 17:21:15 crc kubenswrapper[5024]: I1128 17:21:15.718510 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gkzs2" event={"ID":"730c1e44-786f-4f58-b6fd-bbc27112ed73","Type":"ContainerDied","Data":"eb58e6e86e5c9bf1ccacf44ebc52dedba9f91145b4496dcdeaf6d21db6861ab9"} Nov 28 17:21:16 crc kubenswrapper[5024]: I1128 17:21:16.667284 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:16 crc kubenswrapper[5024]: E1128 17:21:16.667545 5024 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:21:16 crc kubenswrapper[5024]: E1128 17:21:16.667579 5024 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:21:16 crc kubenswrapper[5024]: E1128 17:21:16.667640 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift podName:aa2554f8-7d4e-425d-a74a-3322dc09d7ed nodeName:}" failed. No retries permitted until 2025-11-28 17:21:32.667622783 +0000 UTC m=+1394.716543688 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift") pod "swift-storage-0" (UID: "aa2554f8-7d4e-425d-a74a-3322dc09d7ed") : configmap "swift-ring-files" not found Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.190631 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gwmd4" podUID="50b88778-9829-4418-bfc4-a7377039d584" containerName="ovn-controller" probeResult="failure" output=< Nov 28 17:21:19 crc kubenswrapper[5024]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 17:21:19 crc kubenswrapper[5024]: > Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.203129 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.806222 5024 generic.go:334] "Generic (PLEG): container finished" podID="5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" containerID="0e7f91149c3c427e2aed74cc4ee22ec87df9d9b073bcb55e83b7edac47adb2be" exitCode=0 Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.806344 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hbk2s" event={"ID":"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd","Type":"ContainerDied","Data":"0e7f91149c3c427e2aed74cc4ee22ec87df9d9b073bcb55e83b7edac47adb2be"} Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.824810 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz"] Nov 28 17:21:19 crc kubenswrapper[5024]: E1128 17:21:19.825351 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e284ba5-1197-4d62-8671-b092ab8c8fa7" containerName="mariadb-account-create-update" Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.825369 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e284ba5-1197-4d62-8671-b092ab8c8fa7" containerName="mariadb-account-create-update" Nov 28 17:21:19 crc kubenswrapper[5024]: E1128 17:21:19.825396 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2294c836-32c8-47eb-b5de-563fca6deda8" containerName="mariadb-database-create" Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.825403 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2294c836-32c8-47eb-b5de-563fca6deda8" containerName="mariadb-database-create" Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.825623 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2294c836-32c8-47eb-b5de-563fca6deda8" containerName="mariadb-database-create" Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.825636 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e284ba5-1197-4d62-8671-b092ab8c8fa7" containerName="mariadb-account-create-update" Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.826443 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:19 crc kubenswrapper[5024]: I1128 17:21:19.864479 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz"] Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.131812 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96gw\" (UniqueName: \"kubernetes.io/projected/faa0ae71-201a-464c-ad32-6fc693cf3e62-kube-api-access-p96gw\") pod \"mysqld-exporter-openstack-cell1-db-create-6q8kz\" (UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.131914 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa0ae71-201a-464c-ad32-6fc693cf3e62-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6q8kz\" (UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.219792 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-b5ef-account-create-update-ft92t"] Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.224985 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.231356 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.233837 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p96gw\" (UniqueName: \"kubernetes.io/projected/faa0ae71-201a-464c-ad32-6fc693cf3e62-kube-api-access-p96gw\") pod \"mysqld-exporter-openstack-cell1-db-create-6q8kz\" (UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.233926 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa0ae71-201a-464c-ad32-6fc693cf3e62-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6q8kz\" (UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.234777 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa0ae71-201a-464c-ad32-6fc693cf3e62-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-6q8kz\" (UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.255163 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b5ef-account-create-update-ft92t"] Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.262282 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p96gw\" (UniqueName: \"kubernetes.io/projected/faa0ae71-201a-464c-ad32-6fc693cf3e62-kube-api-access-p96gw\") pod \"mysqld-exporter-openstack-cell1-db-create-6q8kz\" 
(UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.338707 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjcbp\" (UniqueName: \"kubernetes.io/projected/849dfda2-83a5-47f6-aca7-f25ff8136829-kube-api-access-cjcbp\") pod \"mysqld-exporter-b5ef-account-create-update-ft92t\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.338946 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dfda2-83a5-47f6-aca7-f25ff8136829-operator-scripts\") pod \"mysqld-exporter-b5ef-account-create-update-ft92t\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.441111 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjcbp\" (UniqueName: \"kubernetes.io/projected/849dfda2-83a5-47f6-aca7-f25ff8136829-kube-api-access-cjcbp\") pod \"mysqld-exporter-b5ef-account-create-update-ft92t\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.441372 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dfda2-83a5-47f6-aca7-f25ff8136829-operator-scripts\") pod \"mysqld-exporter-b5ef-account-create-update-ft92t\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.444733 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dfda2-83a5-47f6-aca7-f25ff8136829-operator-scripts\") pod \"mysqld-exporter-b5ef-account-create-update-ft92t\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.477682 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjcbp\" (UniqueName: \"kubernetes.io/projected/849dfda2-83a5-47f6-aca7-f25ff8136829-kube-api-access-cjcbp\") pod \"mysqld-exporter-b5ef-account-create-update-ft92t\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.487152 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:20 crc kubenswrapper[5024]: I1128 17:21:20.539917 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:21 crc kubenswrapper[5024]: I1128 17:21:21.824292 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.579561 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.582226 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.610680 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.779946 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-swiftconf\") pod \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780050 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-dispersionconf\") pod \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780080 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730c1e44-786f-4f58-b6fd-bbc27112ed73-operator-scripts\") pod \"730c1e44-786f-4f58-b6fd-bbc27112ed73\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780202 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-combined-ca-bundle\") pod \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780254 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5nxx\" (UniqueName: \"kubernetes.io/projected/e8c41427-dbc5-4f74-a83d-021976f51327-kube-api-access-k5nxx\") pod \"e8c41427-dbc5-4f74-a83d-021976f51327\" (UID: \"e8c41427-dbc5-4f74-a83d-021976f51327\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780342 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58q75\" (UniqueName: \"kubernetes.io/projected/730c1e44-786f-4f58-b6fd-bbc27112ed73-kube-api-access-58q75\") pod \"730c1e44-786f-4f58-b6fd-bbc27112ed73\" (UID: \"730c1e44-786f-4f58-b6fd-bbc27112ed73\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780389 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c41427-dbc5-4f74-a83d-021976f51327-operator-scripts\") pod \"e8c41427-dbc5-4f74-a83d-021976f51327\" (UID: \"e8c41427-dbc5-4f74-a83d-021976f51327\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780415 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-ring-data-devices\") pod \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780470 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-etc-swift\") pod \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780530 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fc7g\" (UniqueName: \"kubernetes.io/projected/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-kube-api-access-9fc7g\") pod \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.780594 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-scripts\") pod \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\" (UID: \"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd\") " Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.785708 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" (UID: "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.789466 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c41427-dbc5-4f74-a83d-021976f51327-kube-api-access-k5nxx" (OuterVolumeSpecName: "kube-api-access-k5nxx") pod "e8c41427-dbc5-4f74-a83d-021976f51327" (UID: "e8c41427-dbc5-4f74-a83d-021976f51327"). InnerVolumeSpecName "kube-api-access-k5nxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.791114 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c41427-dbc5-4f74-a83d-021976f51327-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8c41427-dbc5-4f74-a83d-021976f51327" (UID: "e8c41427-dbc5-4f74-a83d-021976f51327"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.791475 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/730c1e44-786f-4f58-b6fd-bbc27112ed73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "730c1e44-786f-4f58-b6fd-bbc27112ed73" (UID: "730c1e44-786f-4f58-b6fd-bbc27112ed73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.792311 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" (UID: "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.793572 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730c1e44-786f-4f58-b6fd-bbc27112ed73-kube-api-access-58q75" (OuterVolumeSpecName: "kube-api-access-58q75") pod "730c1e44-786f-4f58-b6fd-bbc27112ed73" (UID: "730c1e44-786f-4f58-b6fd-bbc27112ed73"). InnerVolumeSpecName "kube-api-access-58q75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.797945 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" (UID: "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.803744 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-kube-api-access-9fc7g" (OuterVolumeSpecName: "kube-api-access-9fc7g") pod "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" (UID: "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd"). InnerVolumeSpecName "kube-api-access-9fc7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.810977 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-scripts" (OuterVolumeSpecName: "scripts") pod "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" (UID: "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.834106 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" (UID: "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.850679 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hbk2s" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.851283 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hbk2s" event={"ID":"5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd","Type":"ContainerDied","Data":"f360ebcc730c380c380be00c073ccbe0bd582bda2c94e0197d911ef4d7ea7709"} Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.851327 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f360ebcc730c380c380be00c073ccbe0bd582bda2c94e0197d911ef4d7ea7709" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.852740 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" (UID: "5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.852858 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gkzs2" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.852875 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gkzs2" event={"ID":"730c1e44-786f-4f58-b6fd-bbc27112ed73","Type":"ContainerDied","Data":"201a4c50fb19c7f7e2812874cabe8d62f7199bbf6039b0d42d5fe86d5835f69c"} Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.852904 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201a4c50fb19c7f7e2812874cabe8d62f7199bbf6039b0d42d5fe86d5835f69c" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.858554 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerStarted","Data":"667f6207b0846c2aedd8b1a421128da49a0c1dbb6193ff0200162c220dcea269"} Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.863911 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3307-account-create-update-s4hhz" event={"ID":"e8c41427-dbc5-4f74-a83d-021976f51327","Type":"ContainerDied","Data":"53e5afc4c4a79f6f6a459d410213074217ec24eb7a15e57a9945403836898336"} Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.863956 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53e5afc4c4a79f6f6a459d410213074217ec24eb7a15e57a9945403836898336" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.864036 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3307-account-create-update-s4hhz" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883288 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fc7g\" (UniqueName: \"kubernetes.io/projected/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-kube-api-access-9fc7g\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883319 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883328 5024 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883338 5024 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883348 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730c1e44-786f-4f58-b6fd-bbc27112ed73-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883356 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883367 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5nxx\" (UniqueName: \"kubernetes.io/projected/e8c41427-dbc5-4f74-a83d-021976f51327-kube-api-access-k5nxx\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc 
kubenswrapper[5024]: I1128 17:21:22.883376 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58q75\" (UniqueName: \"kubernetes.io/projected/730c1e44-786f-4f58-b6fd-bbc27112ed73-kube-api-access-58q75\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883385 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c41427-dbc5-4f74-a83d-021976f51327-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883396 5024 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.883406 5024 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:22 crc kubenswrapper[5024]: I1128 17:21:22.969119 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b5ef-account-create-update-ft92t"] Nov 28 17:21:23 crc kubenswrapper[5024]: W1128 17:21:23.086749 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaa0ae71_201a_464c_ad32_6fc693cf3e62.slice/crio-c5567d7ab8ce156bd5ea8a5e66060149244c54b349699baacc117100cd64fdd3 WatchSource:0}: Error finding container c5567d7ab8ce156bd5ea8a5e66060149244c54b349699baacc117100cd64fdd3: Status 404 returned error can't find the container with id c5567d7ab8ce156bd5ea8a5e66060149244c54b349699baacc117100cd64fdd3 Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.094691 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz"] Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.646979 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.892151 5024 generic.go:334] "Generic (PLEG): container finished" podID="faa0ae71-201a-464c-ad32-6fc693cf3e62" containerID="6f7871d6a2d962d5a9b25bce4cf94999f3876ad3254202326076b5e65b127a66" exitCode=0 Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.893196 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" event={"ID":"faa0ae71-201a-464c-ad32-6fc693cf3e62","Type":"ContainerDied","Data":"6f7871d6a2d962d5a9b25bce4cf94999f3876ad3254202326076b5e65b127a66"} Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.893363 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" event={"ID":"faa0ae71-201a-464c-ad32-6fc693cf3e62","Type":"ContainerStarted","Data":"c5567d7ab8ce156bd5ea8a5e66060149244c54b349699baacc117100cd64fdd3"} Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.894320 5024 generic.go:334] "Generic (PLEG): container finished" podID="849dfda2-83a5-47f6-aca7-f25ff8136829" containerID="fc17fef4ebcee3a3cf6546ef0fb903c3827789321d149215834c105ea9d0dcfe" exitCode=0 Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.894421 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" event={"ID":"849dfda2-83a5-47f6-aca7-f25ff8136829","Type":"ContainerDied","Data":"fc17fef4ebcee3a3cf6546ef0fb903c3827789321d149215834c105ea9d0dcfe"} Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.894524 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" event={"ID":"849dfda2-83a5-47f6-aca7-f25ff8136829","Type":"ContainerStarted","Data":"2a49382d11bffd5d4da0f602c106afc5c6d9a52c17b8893dd62eb012f9dc1291"} Nov 28 17:21:23 crc kubenswrapper[5024]: I1128 17:21:23.999453 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.200825 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gwmd4" podUID="50b88778-9829-4418-bfc4-a7377039d584" containerName="ovn-controller" probeResult="failure" output=< Nov 28 17:21:24 crc kubenswrapper[5024]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 17:21:24 crc kubenswrapper[5024]: > Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.293058 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tst7t" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.524477 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gwmd4-config-6dsk2"] Nov 28 17:21:24 crc kubenswrapper[5024]: E1128 17:21:24.525434 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730c1e44-786f-4f58-b6fd-bbc27112ed73" containerName="mariadb-database-create" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.525467 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="730c1e44-786f-4f58-b6fd-bbc27112ed73" containerName="mariadb-database-create" Nov 28 17:21:24 crc kubenswrapper[5024]: E1128 17:21:24.525492 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c41427-dbc5-4f74-a83d-021976f51327" containerName="mariadb-account-create-update" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.525500 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c41427-dbc5-4f74-a83d-021976f51327" containerName="mariadb-account-create-update" Nov 28 17:21:24 crc kubenswrapper[5024]: E1128 17:21:24.525521 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" containerName="swift-ring-rebalance" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.525530 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" containerName="swift-ring-rebalance" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.525769 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd" containerName="swift-ring-rebalance" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.525787 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="730c1e44-786f-4f58-b6fd-bbc27112ed73" containerName="mariadb-database-create" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.525808 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c41427-dbc5-4f74-a83d-021976f51327" 
containerName="mariadb-account-create-update" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.526924 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.528992 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.546391 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gwmd4-config-6dsk2"] Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.749563 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-log-ovn\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.749629 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run-ovn\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.749682 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-scripts\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.749855 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-additional-scripts\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.750059 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.750409 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j692p\" (UniqueName: \"kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852512 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-log-ovn\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852568 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run-ovn\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852605 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-scripts\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852688 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-additional-scripts\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852729 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852782 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j692p\" (UniqueName: \"kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852895 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-log-ovn\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.852916 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run-ovn\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.853037 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.854137 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-additional-scripts\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.855267 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-scripts\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:24 crc kubenswrapper[5024]: I1128 17:21:24.871974 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j692p\" (UniqueName: \"kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p\") pod \"ovn-controller-gwmd4-config-6dsk2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.148221 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.518596 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.581502 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.632305 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dfda2-83a5-47f6-aca7-f25ff8136829-operator-scripts\") pod \"849dfda2-83a5-47f6-aca7-f25ff8136829\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.632774 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjcbp\" (UniqueName: \"kubernetes.io/projected/849dfda2-83a5-47f6-aca7-f25ff8136829-kube-api-access-cjcbp\") pod \"849dfda2-83a5-47f6-aca7-f25ff8136829\" (UID: \"849dfda2-83a5-47f6-aca7-f25ff8136829\") " Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.633299 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849dfda2-83a5-47f6-aca7-f25ff8136829-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "849dfda2-83a5-47f6-aca7-f25ff8136829" (UID: "849dfda2-83a5-47f6-aca7-f25ff8136829"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.633587 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849dfda2-83a5-47f6-aca7-f25ff8136829-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.639745 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849dfda2-83a5-47f6-aca7-f25ff8136829-kube-api-access-cjcbp" (OuterVolumeSpecName: "kube-api-access-cjcbp") pod "849dfda2-83a5-47f6-aca7-f25ff8136829" (UID: "849dfda2-83a5-47f6-aca7-f25ff8136829"). InnerVolumeSpecName "kube-api-access-cjcbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.735459 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p96gw\" (UniqueName: \"kubernetes.io/projected/faa0ae71-201a-464c-ad32-6fc693cf3e62-kube-api-access-p96gw\") pod \"faa0ae71-201a-464c-ad32-6fc693cf3e62\" (UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.735721 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa0ae71-201a-464c-ad32-6fc693cf3e62-operator-scripts\") pod \"faa0ae71-201a-464c-ad32-6fc693cf3e62\" (UID: \"faa0ae71-201a-464c-ad32-6fc693cf3e62\") " Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.736316 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa0ae71-201a-464c-ad32-6fc693cf3e62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "faa0ae71-201a-464c-ad32-6fc693cf3e62" (UID: "faa0ae71-201a-464c-ad32-6fc693cf3e62"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.736836 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faa0ae71-201a-464c-ad32-6fc693cf3e62-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.736851 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjcbp\" (UniqueName: \"kubernetes.io/projected/849dfda2-83a5-47f6-aca7-f25ff8136829-kube-api-access-cjcbp\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.740891 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa0ae71-201a-464c-ad32-6fc693cf3e62-kube-api-access-p96gw" (OuterVolumeSpecName: "kube-api-access-p96gw") pod "faa0ae71-201a-464c-ad32-6fc693cf3e62" (UID: "faa0ae71-201a-464c-ad32-6fc693cf3e62"). InnerVolumeSpecName "kube-api-access-p96gw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.838814 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p96gw\" (UniqueName: \"kubernetes.io/projected/faa0ae71-201a-464c-ad32-6fc693cf3e62-kube-api-access-p96gw\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:25 crc kubenswrapper[5024]: W1128 17:21:25.838837 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2471428c_2cd4_4f21_b70b_5af5fa7521f2.slice/crio-1d7ec4ab2c11dddda985e421dbc5ef3f05f1d666073854ec3626ce31cc94e0f0 WatchSource:0}: Error finding container 1d7ec4ab2c11dddda985e421dbc5ef3f05f1d666073854ec3626ce31cc94e0f0: Status 404 returned error can't find the container with id 1d7ec4ab2c11dddda985e421dbc5ef3f05f1d666073854ec3626ce31cc94e0f0 Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.839166 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gwmd4-config-6dsk2"] Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.919112 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4-config-6dsk2" event={"ID":"2471428c-2cd4-4f21-b70b-5af5fa7521f2","Type":"ContainerStarted","Data":"1d7ec4ab2c11dddda985e421dbc5ef3f05f1d666073854ec3626ce31cc94e0f0"} Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.921565 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.921557 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz" event={"ID":"faa0ae71-201a-464c-ad32-6fc693cf3e62","Type":"ContainerDied","Data":"c5567d7ab8ce156bd5ea8a5e66060149244c54b349699baacc117100cd64fdd3"} Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.921844 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5567d7ab8ce156bd5ea8a5e66060149244c54b349699baacc117100cd64fdd3" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.922815 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" event={"ID":"849dfda2-83a5-47f6-aca7-f25ff8136829","Type":"ContainerDied","Data":"2a49382d11bffd5d4da0f602c106afc5c6d9a52c17b8893dd62eb012f9dc1291"} Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.922850 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a49382d11bffd5d4da0f602c106afc5c6d9a52c17b8893dd62eb012f9dc1291" Nov 28 17:21:25 crc kubenswrapper[5024]: I1128 17:21:25.922911 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b5ef-account-create-update-ft92t" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.838790 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ppx6b"] Nov 28 17:21:27 crc kubenswrapper[5024]: E1128 17:21:27.839533 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa0ae71-201a-464c-ad32-6fc693cf3e62" containerName="mariadb-database-create" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.839548 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa0ae71-201a-464c-ad32-6fc693cf3e62" containerName="mariadb-database-create" Nov 28 17:21:27 crc kubenswrapper[5024]: E1128 17:21:27.839608 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849dfda2-83a5-47f6-aca7-f25ff8136829" containerName="mariadb-account-create-update" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.839616 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="849dfda2-83a5-47f6-aca7-f25ff8136829" containerName="mariadb-account-create-update" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.839833 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa0ae71-201a-464c-ad32-6fc693cf3e62" containerName="mariadb-database-create" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.839866 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="849dfda2-83a5-47f6-aca7-f25ff8136829" containerName="mariadb-account-create-update" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.840772 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.842666 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mcqcv" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.843054 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.850831 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ppx6b"] Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.930837 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb99t\" (UniqueName: \"kubernetes.io/projected/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-kube-api-access-rb99t\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.930891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-combined-ca-bundle\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.930941 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-db-sync-config-data\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.931006 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-config-data\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.969592 5024 generic.go:334] "Generic (PLEG): container finished" podID="2471428c-2cd4-4f21-b70b-5af5fa7521f2" containerID="9c802141beeee8c7aa00167fad6f387352bfda3be061fedca99b2b6ae02f1322" exitCode=0 Nov 28 17:21:27 crc kubenswrapper[5024]: I1128 17:21:27.969641 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4-config-6dsk2" event={"ID":"2471428c-2cd4-4f21-b70b-5af5fa7521f2","Type":"ContainerDied","Data":"9c802141beeee8c7aa00167fad6f387352bfda3be061fedca99b2b6ae02f1322"} Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.033293 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-config-data\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.033708 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb99t\" (UniqueName: \"kubernetes.io/projected/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-kube-api-access-rb99t\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.033826 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-combined-ca-bundle\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.033962 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-db-sync-config-data\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.039708 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-db-sync-config-data\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.039864 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-combined-ca-bundle\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.040663 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-config-data\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.054243 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rb99t\" (UniqueName: \"kubernetes.io/projected/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-kube-api-access-rb99t\") pod \"glance-db-sync-ppx6b\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.156791 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ppx6b" Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.716892 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ppx6b"] Nov 28 17:21:28 crc kubenswrapper[5024]: W1128 17:21:28.719630 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc17c2e08_eb13_4f5f_8ff2_91f1b91c6be6.slice/crio-a3f4b6a85420a8c604c8cc3ad76b468e7ef05f5b261219ae4dc4c0d771e469d4 WatchSource:0}: Error finding container a3f4b6a85420a8c604c8cc3ad76b468e7ef05f5b261219ae4dc4c0d771e469d4: Status 404 returned error can't find the container with id a3f4b6a85420a8c604c8cc3ad76b468e7ef05f5b261219ae4dc4c0d771e469d4 Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.723151 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:21:28 crc kubenswrapper[5024]: I1128 17:21:28.979684 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ppx6b" event={"ID":"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6","Type":"ContainerStarted","Data":"a3f4b6a85420a8c604c8cc3ad76b468e7ef05f5b261219ae4dc4c0d771e469d4"} Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.193337 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gwmd4" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.454720 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648643 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-additional-scripts\") pod \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648772 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-log-ovn\") pod \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648819 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run\") pod \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648838 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run-ovn\") pod \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648880 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j692p\" (UniqueName: \"kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p\") pod \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648932 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run" (OuterVolumeSpecName: "var-run") pod "2471428c-2cd4-4f21-b70b-5af5fa7521f2" (UID: "2471428c-2cd4-4f21-b70b-5af5fa7521f2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648932 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2471428c-2cd4-4f21-b70b-5af5fa7521f2" (UID: "2471428c-2cd4-4f21-b70b-5af5fa7521f2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648989 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-scripts\") pod \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.648996 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2471428c-2cd4-4f21-b70b-5af5fa7521f2" (UID: "2471428c-2cd4-4f21-b70b-5af5fa7521f2"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.649564 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2471428c-2cd4-4f21-b70b-5af5fa7521f2" (UID: "2471428c-2cd4-4f21-b70b-5af5fa7521f2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.649745 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-scripts" (OuterVolumeSpecName: "scripts") pod "2471428c-2cd4-4f21-b70b-5af5fa7521f2" (UID: "2471428c-2cd4-4f21-b70b-5af5fa7521f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.650269 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.650291 5024 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2471428c-2cd4-4f21-b70b-5af5fa7521f2-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.650306 5024 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.650317 5024 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.650327 5024 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2471428c-2cd4-4f21-b70b-5af5fa7521f2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.750443 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p" (OuterVolumeSpecName: "kube-api-access-j692p") pod "2471428c-2cd4-4f21-b70b-5af5fa7521f2" (UID: "2471428c-2cd4-4f21-b70b-5af5fa7521f2"). InnerVolumeSpecName "kube-api-access-j692p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.751318 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j692p\" (UniqueName: \"kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p\") pod \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\" (UID: \"2471428c-2cd4-4f21-b70b-5af5fa7521f2\") " Nov 28 17:21:29 crc kubenswrapper[5024]: W1128 17:21:29.751937 5024 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2471428c-2cd4-4f21-b70b-5af5fa7521f2/volumes/kubernetes.io~projected/kube-api-access-j692p Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.751991 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p" (OuterVolumeSpecName: "kube-api-access-j692p") pod "2471428c-2cd4-4f21-b70b-5af5fa7521f2" (UID: "2471428c-2cd4-4f21-b70b-5af5fa7521f2"). InnerVolumeSpecName "kube-api-access-j692p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.854403 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j692p\" (UniqueName: \"kubernetes.io/projected/2471428c-2cd4-4f21-b70b-5af5fa7521f2-kube-api-access-j692p\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.992597 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4-config-6dsk2" event={"ID":"2471428c-2cd4-4f21-b70b-5af5fa7521f2","Type":"ContainerDied","Data":"1d7ec4ab2c11dddda985e421dbc5ef3f05f1d666073854ec3626ce31cc94e0f0"} Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.993761 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7ec4ab2c11dddda985e421dbc5ef3f05f1d666073854ec3626ce31cc94e0f0" Nov 28 17:21:29 crc kubenswrapper[5024]: I1128 17:21:29.992657 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-6dsk2" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.582958 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gwmd4-config-6dsk2"] Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.595624 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gwmd4-config-6dsk2"] Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.674993 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gwmd4-config-22g7b"] Nov 28 17:21:30 crc kubenswrapper[5024]: E1128 17:21:30.675721 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2471428c-2cd4-4f21-b70b-5af5fa7521f2" containerName="ovn-config" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.675746 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2471428c-2cd4-4f21-b70b-5af5fa7521f2" containerName="ovn-config" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.676045 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2471428c-2cd4-4f21-b70b-5af5fa7521f2" containerName="ovn-config" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.677103 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.679370 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.683766 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gwmd4-config-22g7b"] Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.757825 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.760100 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.763386 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.777204 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.796502 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-log-ovn\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.796555 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-additional-scripts\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.797244 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.798145 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-scripts\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.798230 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run-ovn\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.798322 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nd7v\" (UniqueName: \"kubernetes.io/projected/c7e25b56-9b79-4a1c-ac2f-678b370669dd-kube-api-access-5nd7v\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " 
pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900164 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-config-data\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900273 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-scripts\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900350 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900398 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run-ovn\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900471 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nd7v\" (UniqueName: \"kubernetes.io/projected/c7e25b56-9b79-4a1c-ac2f-678b370669dd-kube-api-access-5nd7v\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900553 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-log-ovn\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900574 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-additional-scripts\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900629 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdlmm\" (UniqueName: \"kubernetes.io/projected/47a2db16-e493-45bc-b0ab-7606965b1612-kube-api-access-cdlmm\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900654 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " 
pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900781 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.900793 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run-ovn\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.901235 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-log-ovn\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.901483 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-additional-scripts\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:30 crc kubenswrapper[5024]: I1128 17:21:30.902943 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-scripts\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.003903 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdlmm\" (UniqueName: \"kubernetes.io/projected/47a2db16-e493-45bc-b0ab-7606965b1612-kube-api-access-cdlmm\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.004676 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-config-data\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.005314 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.087274 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nd7v\" (UniqueName: \"kubernetes.io/projected/c7e25b56-9b79-4a1c-ac2f-678b370669dd-kube-api-access-5nd7v\") pod \"ovn-controller-gwmd4-config-22g7b\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") " pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.090766 
5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-config-data\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.092202 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.092695 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdlmm\" (UniqueName: \"kubernetes.io/projected/47a2db16-e493-45bc-b0ab-7606965b1612-kube-api-access-cdlmm\") pod \"mysqld-exporter-0\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " pod="openstack/mysqld-exporter-0" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.296275 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.386459 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 17:21:31 crc kubenswrapper[5024]: I1128 17:21:31.853382 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gwmd4-config-22g7b"] Nov 28 17:21:32 crc kubenswrapper[5024]: I1128 17:21:32.071323 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerStarted","Data":"1a8d14a1d59e13c8a36e1679d66c11a5f7760f922d105ae85d2a4091202a5931"} Nov 28 17:21:32 crc kubenswrapper[5024]: I1128 17:21:32.073002 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:21:32 crc kubenswrapper[5024]: W1128 17:21:32.073346 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a2db16_e493_45bc_b0ab_7606965b1612.slice/crio-fa2fa941404488f36bc0e431d3f7a9a41ea014f56e88f1df38a184ec5746bba7 WatchSource:0}: Error finding container fa2fa941404488f36bc0e431d3f7a9a41ea014f56e88f1df38a184ec5746bba7: Status 404 returned error can't find the container with id fa2fa941404488f36bc0e431d3f7a9a41ea014f56e88f1df38a184ec5746bba7 Nov 28 17:21:32 crc kubenswrapper[5024]: I1128 17:21:32.075541 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4-config-22g7b" event={"ID":"c7e25b56-9b79-4a1c-ac2f-678b370669dd","Type":"ContainerStarted","Data":"2d4c294ce6fa5162d361aea4e32463706f36771643709481037b19d61189203d"} Nov 28 17:21:32 crc kubenswrapper[5024]: I1128 17:21:32.516098 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2471428c-2cd4-4f21-b70b-5af5fa7521f2" path="/var/lib/kubelet/pods/2471428c-2cd4-4f21-b70b-5af5fa7521f2/volumes" Nov 28 17:21:32 crc kubenswrapper[5024]: I1128 17:21:32.669963 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2554f8-7d4e-425d-a74a-3322dc09d7ed-etc-swift\") pod \"swift-storage-0\" (UID: \"aa2554f8-7d4e-425d-a74a-3322dc09d7ed\") " pod="openstack/swift-storage-0" Nov 28 17:21:32 crc kubenswrapper[5024]: I1128 17:21:32.677046 5024 
Nov 28 17:21:32 crc kubenswrapper[5024]: I1128 17:21:32.895270 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 28 17:21:33 crc kubenswrapper[5024]: I1128 17:21:33.120589 5024 generic.go:334] "Generic (PLEG): container finished" podID="c7e25b56-9b79-4a1c-ac2f-678b370669dd" containerID="ffcd751d53cca8b8d9f971963f6fa36719c4c67e8f0760d606fb4add08d13c45" exitCode=0
Nov 28 17:21:33 crc kubenswrapper[5024]: I1128 17:21:33.121121 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4-config-22g7b" event={"ID":"c7e25b56-9b79-4a1c-ac2f-678b370669dd","Type":"ContainerDied","Data":"ffcd751d53cca8b8d9f971963f6fa36719c4c67e8f0760d606fb4add08d13c45"}
Nov 28 17:21:33 crc kubenswrapper[5024]: I1128 17:21:33.134445 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"47a2db16-e493-45bc-b0ab-7606965b1612","Type":"ContainerStarted","Data":"fa2fa941404488f36bc0e431d3f7a9a41ea014f56e88f1df38a184ec5746bba7"}
Nov 28 17:21:33 crc kubenswrapper[5024]: I1128 17:21:33.646293 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 28 17:21:33 crc kubenswrapper[5024]: I1128 17:21:33.666294 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 28 17:21:33 crc kubenswrapper[5024]: I1128 17:21:33.998291 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.092915 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5tcbk"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.094979 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.118142 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5tcbk"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.238908 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5cdj\" (UniqueName: \"kubernetes.io/projected/c97879e5-b703-4517-bdef-ff788259266f-kube-api-access-r5cdj\") pod \"heat-db-create-5tcbk\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.239924 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97879e5-b703-4517-bdef-ff788259266f-operator-scripts\") pod \"heat-db-create-5tcbk\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.298263 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-2117-account-create-update-zwt9d"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.301764 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.305340 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.308123 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2117-account-create-update-zwt9d"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.343927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5cdj\" (UniqueName: \"kubernetes.io/projected/c97879e5-b703-4517-bdef-ff788259266f-kube-api-access-r5cdj\") pod \"heat-db-create-5tcbk\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.343971 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97879e5-b703-4517-bdef-ff788259266f-operator-scripts\") pod \"heat-db-create-5tcbk\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.350030 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97879e5-b703-4517-bdef-ff788259266f-operator-scripts\") pod \"heat-db-create-5tcbk\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.375517 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7056-account-create-update-fh7lw"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.378517 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.383691 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.429562 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5cdj\" (UniqueName: \"kubernetes.io/projected/c97879e5-b703-4517-bdef-ff788259266f-kube-api-access-r5cdj\") pod \"heat-db-create-5tcbk\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.445362 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5tcbk"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.457583 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkrq\" (UniqueName: \"kubernetes.io/projected/6cd4b169-ce4b-4b45-969a-7f73011edf61-kube-api-access-2bkrq\") pod \"heat-2117-account-create-update-zwt9d\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.458276 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd4b169-ce4b-4b45-969a-7f73011edf61-operator-scripts\") pod \"heat-2117-account-create-update-zwt9d\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.470093 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7056-account-create-update-fh7lw"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.549927 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jmt7n"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.551270 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmt7n"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.562799 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb03410d-e1f0-4036-81fc-76f81bf76340-operator-scripts\") pod \"cinder-7056-account-create-update-fh7lw\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.563127 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd4b169-ce4b-4b45-969a-7f73011edf61-operator-scripts\") pod \"heat-2117-account-create-update-zwt9d\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.563198 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g277p\" (UniqueName: \"kubernetes.io/projected/fb03410d-e1f0-4036-81fc-76f81bf76340-kube-api-access-g277p\") pod \"cinder-7056-account-create-update-fh7lw\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.563260 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bkrq\" (UniqueName: \"kubernetes.io/projected/6cd4b169-ce4b-4b45-969a-7f73011edf61-kube-api-access-2bkrq\") pod \"heat-2117-account-create-update-zwt9d\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.564524 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd4b169-ce4b-4b45-969a-7f73011edf61-operator-scripts\") pod \"heat-2117-account-create-update-zwt9d\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.581184 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmt7n"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.593809 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d45mr"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.597228 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d45mr"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.599649 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bkrq\" (UniqueName: \"kubernetes.io/projected/6cd4b169-ce4b-4b45-969a-7f73011edf61-kube-api-access-2bkrq\") pod \"heat-2117-account-create-update-zwt9d\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.604121 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d45mr"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.638091 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2117-account-create-update-zwt9d"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.665300 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g277p\" (UniqueName: \"kubernetes.io/projected/fb03410d-e1f0-4036-81fc-76f81bf76340-kube-api-access-g277p\") pod \"cinder-7056-account-create-update-fh7lw\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.665670 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7c9\" (UniqueName: \"kubernetes.io/projected/547243af-e537-4990-ba48-b668f5a87bb7-kube-api-access-5n7c9\") pod \"cinder-db-create-jmt7n\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " pod="openstack/cinder-db-create-jmt7n"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.665756 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb03410d-e1f0-4036-81fc-76f81bf76340-operator-scripts\") pod \"cinder-7056-account-create-update-fh7lw\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.665855 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547243af-e537-4990-ba48-b668f5a87bb7-operator-scripts\") pod \"cinder-db-create-jmt7n\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " pod="openstack/cinder-db-create-jmt7n"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.667178 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb03410d-e1f0-4036-81fc-76f81bf76340-operator-scripts\") pod \"cinder-7056-account-create-update-fh7lw\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.698966 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g277p\" (UniqueName: \"kubernetes.io/projected/fb03410d-e1f0-4036-81fc-76f81bf76340-kube-api-access-g277p\") pod \"cinder-7056-account-create-update-fh7lw\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.706727 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7c64-account-create-update-67zlr"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.708701 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7c64-account-create-update-67zlr"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.711578 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.737704 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7c64-account-create-update-67zlr"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.770401 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-operator-scripts\") pod \"barbican-db-create-d45mr\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " pod="openstack/barbican-db-create-d45mr"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.770466 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n7c9\" (UniqueName: \"kubernetes.io/projected/547243af-e537-4990-ba48-b668f5a87bb7-kube-api-access-5n7c9\") pod \"cinder-db-create-jmt7n\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " pod="openstack/cinder-db-create-jmt7n"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.770499 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547243af-e537-4990-ba48-b668f5a87bb7-operator-scripts\") pod \"cinder-db-create-jmt7n\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " pod="openstack/cinder-db-create-jmt7n"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.770530 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4kf\" (UniqueName: \"kubernetes.io/projected/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-kube-api-access-hj4kf\") pod \"barbican-db-create-d45mr\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " pod="openstack/barbican-db-create-d45mr"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.771554 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547243af-e537-4990-ba48-b668f5a87bb7-operator-scripts\") pod \"cinder-db-create-jmt7n\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " pod="openstack/cinder-db-create-jmt7n"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.794845 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7056-account-create-update-fh7lw"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.803823 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n7c9\" (UniqueName: \"kubernetes.io/projected/547243af-e537-4990-ba48-b668f5a87bb7-kube-api-access-5n7c9\") pod \"cinder-db-create-jmt7n\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " pod="openstack/cinder-db-create-jmt7n"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.818294 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4t4sf"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.819829 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.823934 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7sbwz"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.824196 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.824325 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.824433 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.836554 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4t4sf"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.858809 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0066-account-create-update-swplb"]
Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.873121 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0066-account-create-update-swplb"
Need to start a new one" pod="openstack/neutron-0066-account-create-update-swplb" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.874398 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2w2\" (UniqueName: \"kubernetes.io/projected/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-kube-api-access-rt2w2\") pod \"barbican-7c64-account-create-update-67zlr\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.874448 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-operator-scripts\") pod \"barbican-db-create-d45mr\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " pod="openstack/barbican-db-create-d45mr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.874536 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj4kf\" (UniqueName: \"kubernetes.io/projected/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-kube-api-access-hj4kf\") pod \"barbican-db-create-d45mr\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " pod="openstack/barbican-db-create-d45mr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.874709 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-operator-scripts\") pod \"barbican-7c64-account-create-update-67zlr\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.875723 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-operator-scripts\") pod \"barbican-db-create-d45mr\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " pod="openstack/barbican-db-create-d45mr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.881046 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.881480 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0066-account-create-update-swplb"] Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.883398 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmt7n" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.974788 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj4kf\" (UniqueName: \"kubernetes.io/projected/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-kube-api-access-hj4kf\") pod \"barbican-db-create-d45mr\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " pod="openstack/barbican-db-create-d45mr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.981305 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-b99cs"] Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.986059 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-b99cs" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.987907 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-operator-scripts\") pod \"neutron-0066-account-create-update-swplb\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " pod="openstack/neutron-0066-account-create-update-swplb" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.988030 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqd5\" (UniqueName: \"kubernetes.io/projected/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-kube-api-access-vhqd5\") pod \"neutron-0066-account-create-update-swplb\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " pod="openstack/neutron-0066-account-create-update-swplb" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.988068 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtrf\" (UniqueName: \"kubernetes.io/projected/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-kube-api-access-mxtrf\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.988128 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-operator-scripts\") pod \"barbican-7c64-account-create-update-67zlr\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.988455 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d45mr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.989600 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-operator-scripts\") pod \"barbican-7c64-account-create-update-67zlr\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.989754 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-combined-ca-bundle\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.990034 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-config-data\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf" Nov 28 17:21:34 crc kubenswrapper[5024]: I1128 17:21:34.990093 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt2w2\" (UniqueName: \"kubernetes.io/projected/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-kube-api-access-rt2w2\") pod \"barbican-7c64-account-create-update-67zlr\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.019168 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b99cs"] Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.034796 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt2w2\" (UniqueName: \"kubernetes.io/projected/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-kube-api-access-rt2w2\") pod \"barbican-7c64-account-create-update-67zlr\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.099631 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-operator-scripts\") pod \"neutron-0066-account-create-update-swplb\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " pod="openstack/neutron-0066-account-create-update-swplb" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.099711 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqd5\" (UniqueName: \"kubernetes.io/projected/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-kube-api-access-vhqd5\") pod \"neutron-0066-account-create-update-swplb\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " pod="openstack/neutron-0066-account-create-update-swplb" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.099733 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxtrf\" (UniqueName: \"kubernetes.io/projected/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-kube-api-access-mxtrf\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf" Nov 28 17:21:35 crc kubenswrapper[5024]: 
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.099924 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q98x\" (UniqueName: \"kubernetes.io/projected/770b6c25-63f4-4690-9a2e-b64f74e86272-kube-api-access-8q98x\") pod \"neutron-db-create-b99cs\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.099956 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-config-data\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.100107 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/770b6c25-63f4-4690-9a2e-b64f74e86272-operator-scripts\") pod \"neutron-db-create-b99cs\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.100381 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-operator-scripts\") pod \"neutron-0066-account-create-update-swplb\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " pod="openstack/neutron-0066-account-create-update-swplb"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.103380 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-combined-ca-bundle\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.105332 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-config-data\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.120378 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqd5\" (UniqueName: \"kubernetes.io/projected/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-kube-api-access-vhqd5\") pod \"neutron-0066-account-create-update-swplb\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " pod="openstack/neutron-0066-account-create-update-swplb"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.121108 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxtrf\" (UniqueName: \"kubernetes.io/projected/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-kube-api-access-mxtrf\") pod \"keystone-db-sync-4t4sf\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") " pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.167315 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7c64-account-create-update-67zlr"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.194962 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.202080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q98x\" (UniqueName: \"kubernetes.io/projected/770b6c25-63f4-4690-9a2e-b64f74e86272-kube-api-access-8q98x\") pod \"neutron-db-create-b99cs\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.202209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/770b6c25-63f4-4690-9a2e-b64f74e86272-operator-scripts\") pod \"neutron-db-create-b99cs\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.203076 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/770b6c25-63f4-4690-9a2e-b64f74e86272-operator-scripts\") pod \"neutron-db-create-b99cs\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.218562 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q98x\" (UniqueName: \"kubernetes.io/projected/770b6c25-63f4-4690-9a2e-b64f74e86272-kube-api-access-8q98x\") pod \"neutron-db-create-b99cs\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.247974 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0066-account-create-update-swplb"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.251262 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-22g7b"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.320562 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.406420 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-scripts\") pod \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") "
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.406659 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nd7v\" (UniqueName: \"kubernetes.io/projected/c7e25b56-9b79-4a1c-ac2f-678b370669dd-kube-api-access-5nd7v\") pod \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") "
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.406707 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run\") pod \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") "
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.406731 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-log-ovn\") pod \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") "
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.406746 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-additional-scripts\") pod \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") "
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.406776 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run-ovn\") pod \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\" (UID: \"c7e25b56-9b79-4a1c-ac2f-678b370669dd\") "
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.407351 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c7e25b56-9b79-4a1c-ac2f-678b370669dd" (UID: "c7e25b56-9b79-4a1c-ac2f-678b370669dd"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.407358 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c7e25b56-9b79-4a1c-ac2f-678b370669dd" (UID: "c7e25b56-9b79-4a1c-ac2f-678b370669dd"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.407421 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run" (OuterVolumeSpecName: "var-run") pod "c7e25b56-9b79-4a1c-ac2f-678b370669dd" (UID: "c7e25b56-9b79-4a1c-ac2f-678b370669dd"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.408549 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c7e25b56-9b79-4a1c-ac2f-678b370669dd" (UID: "c7e25b56-9b79-4a1c-ac2f-678b370669dd"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.408899 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-scripts" (OuterVolumeSpecName: "scripts") pod "c7e25b56-9b79-4a1c-ac2f-678b370669dd" (UID: "c7e25b56-9b79-4a1c-ac2f-678b370669dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.410678 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e25b56-9b79-4a1c-ac2f-678b370669dd-kube-api-access-5nd7v" (OuterVolumeSpecName: "kube-api-access-5nd7v") pod "c7e25b56-9b79-4a1c-ac2f-678b370669dd" (UID: "c7e25b56-9b79-4a1c-ac2f-678b370669dd"). InnerVolumeSpecName "kube-api-access-5nd7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.512846 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nd7v\" (UniqueName: \"kubernetes.io/projected/c7e25b56-9b79-4a1c-ac2f-678b370669dd-kube-api-access-5nd7v\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.513131 5024 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.513165 5024 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.513178 5024 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.513189 5024 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c7e25b56-9b79-4a1c-ac2f-678b370669dd-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[5024]: I1128 17:21:35.513200 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7e25b56-9b79-4a1c-ac2f-678b370669dd-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.156968 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmt7n"] Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.197467 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gwmd4-config-22g7b" Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.197913 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gwmd4-config-22g7b" event={"ID":"c7e25b56-9b79-4a1c-ac2f-678b370669dd","Type":"ContainerDied","Data":"2d4c294ce6fa5162d361aea4e32463706f36771643709481037b19d61189203d"} Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.197962 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d4c294ce6fa5162d361aea4e32463706f36771643709481037b19d61189203d" Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.205614 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"8447eedd99095555c4bc381da71f4e56316cd5b6ccb2ecacd736f5804d33e95e"} Nov 28 17:21:36 crc kubenswrapper[5024]: W1128 17:21:36.298178 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod547243af_e537_4990_ba48_b668f5a87bb7.slice/crio-55c8a9ade3cd7bb95dffd088f1524e942833db79557bbda2e34481ea6b8b5533 WatchSource:0}: Error finding container 55c8a9ade3cd7bb95dffd088f1524e942833db79557bbda2e34481ea6b8b5533: Status 404 returned error can't find the container with id 55c8a9ade3cd7bb95dffd088f1524e942833db79557bbda2e34481ea6b8b5533 Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.380754 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gwmd4-config-22g7b"] Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.431357 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gwmd4-config-22g7b"] Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.542900 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e25b56-9b79-4a1c-ac2f-678b370669dd" path="/var/lib/kubelet/pods/c7e25b56-9b79-4a1c-ac2f-678b370669dd/volumes" Nov 28 17:21:36 crc kubenswrapper[5024]: I1128 17:21:36.879376 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5tcbk"] Nov 28 17:21:36 crc kubenswrapper[5024]: W1128 17:21:36.883324 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc97879e5_b703_4517_bdef_ff788259266f.slice/crio-47e150a93477b53f93f1b0ef57881a8acf748160315043e10118a3749b735479 WatchSource:0}: Error finding container 47e150a93477b53f93f1b0ef57881a8acf748160315043e10118a3749b735479: Status 404 returned error can't find the container with id 47e150a93477b53f93f1b0ef57881a8acf748160315043e10118a3749b735479 Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.220996 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5tcbk" event={"ID":"c97879e5-b703-4517-bdef-ff788259266f","Type":"ContainerStarted","Data":"f0695fda48a06b1e114bf03cc4a5508e04945ec1d2529dad5d1148acefb511f0"} Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.221383 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5tcbk" event={"ID":"c97879e5-b703-4517-bdef-ff788259266f","Type":"ContainerStarted","Data":"47e150a93477b53f93f1b0ef57881a8acf748160315043e10118a3749b735479"} Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.247836 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"47a2db16-e493-45bc-b0ab-7606965b1612","Type":"ContainerStarted","Data":"cbd840d182f848c421656b0710596616878591dc5a5c9cd3541b49ea8670a7dc"} Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.259968 5024 generic.go:334] "Generic (PLEG): container finished" podID="547243af-e537-4990-ba48-b668f5a87bb7" containerID="3434faa4421d4c211f09a73519f9b0bcd034235c8ea66e5dafd52eefa8fe0443" exitCode=0 Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.260037 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmt7n" event={"ID":"547243af-e537-4990-ba48-b668f5a87bb7","Type":"ContainerDied","Data":"3434faa4421d4c211f09a73519f9b0bcd034235c8ea66e5dafd52eefa8fe0443"} Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.260070 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmt7n" event={"ID":"547243af-e537-4990-ba48-b668f5a87bb7","Type":"ContainerStarted","Data":"55c8a9ade3cd7bb95dffd088f1524e942833db79557bbda2e34481ea6b8b5533"} Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.263295 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7c64-account-create-update-67zlr"] Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.277805 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-5tcbk" podStartSLOduration=3.277763225 podStartE2EDuration="3.277763225s" podCreationTimestamp="2025-11-28 17:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:37.237755092 +0000 UTC m=+1399.286675997" watchObservedRunningTime="2025-11-28 17:21:37.277763225 +0000 UTC m=+1399.326684130" Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.310609 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4t4sf"] Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.339619 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-2117-account-create-update-zwt9d"] Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.355772 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.051609478 podStartE2EDuration="7.355735557s" podCreationTimestamp="2025-11-28 17:21:30 +0000 UTC" firstStartedPulling="2025-11-28 17:21:32.075548321 +0000 UTC m=+1394.124469226" lastFinishedPulling="2025-11-28 17:21:35.3796744 +0000 UTC m=+1397.428595305" observedRunningTime="2025-11-28 17:21:37.291496347 +0000 UTC m=+1399.340417252" watchObservedRunningTime="2025-11-28 17:21:37.355735557 +0000 UTC m=+1399.404656462" Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.390038 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d45mr"] Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.419069 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7056-account-create-update-fh7lw"] Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.431093 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b99cs"] Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.439446 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0066-account-create-update-swplb"] Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.565544 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.565603 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.565654 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.567001 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c14bd832feb4db8425d0f1a45e06a6d0b13d8ee68a565113d9375a7e774e72b0"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:21:37 crc kubenswrapper[5024]: I1128 17:21:37.567075 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://c14bd832feb4db8425d0f1a45e06a6d0b13d8ee68a565113d9375a7e774e72b0" gracePeriod=600 Nov 28 17:21:37 crc kubenswrapper[5024]: W1128 17:21:37.739251 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec4831bb_4252_4d37_83f4_1b9e4f88ea35.slice/crio-c985875bfd21907de3616e447a62ba8f700ee17941725e201645899ab3f91238 WatchSource:0}: Error finding container c985875bfd21907de3616e447a62ba8f700ee17941725e201645899ab3f91238: Status 404 returned error can't find the container with id c985875bfd21907de3616e447a62ba8f700ee17941725e201645899ab3f91238 Nov 28 17:21:37 crc kubenswrapper[5024]: W1128 17:21:37.740688 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9fa01bb_f5e1_437f_b417_f201ad7b2fad.slice/crio-6bc8c17807276cd16e96fa31ad61d123c73b72c15d302cbec47dc0baf08e73b7 WatchSource:0}: Error finding container 6bc8c17807276cd16e96fa31ad61d123c73b72c15d302cbec47dc0baf08e73b7: Status 404 returned error can't find the container with id 6bc8c17807276cd16e96fa31ad61d123c73b72c15d302cbec47dc0baf08e73b7 Nov 28 17:21:37 crc kubenswrapper[5024]: W1128 17:21:37.749442 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cd4b169_ce4b_4b45_969a_7f73011edf61.slice/crio-6914fad50f5cf01371420c6470f370e29ab87d3e85b8c87d72ef175a7195f884 WatchSource:0}: Error finding container 6914fad50f5cf01371420c6470f370e29ab87d3e85b8c87d72ef175a7195f884: Status 404 returned error can't find the container with id 6914fad50f5cf01371420c6470f370e29ab87d3e85b8c87d72ef175a7195f884 Nov 28 17:21:37 crc kubenswrapper[5024]: W1128 17:21:37.755746 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod770b6c25_63f4_4690_9a2e_b64f74e86272.slice/crio-a79cc7c3f506eb9633611afbeccb69d224e3fb3e81d8463ac69b5abd89521e24 
WatchSource:0}: Error finding container a79cc7c3f506eb9633611afbeccb69d224e3fb3e81d8463ac69b5abd89521e24: Status 404 returned error can't find the container with id a79cc7c3f506eb9633611afbeccb69d224e3fb3e81d8463ac69b5abd89521e24 Nov 28 17:21:37 crc kubenswrapper[5024]: W1128 17:21:37.769373 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb03410d_e1f0_4036_81fc_76f81bf76340.slice/crio-789c2446a7d44947338c8c25d3b03969bcfb64126edc34de0aaec08ebba47509 WatchSource:0}: Error finding container 789c2446a7d44947338c8c25d3b03969bcfb64126edc34de0aaec08ebba47509: Status 404 returned error can't find the container with id 789c2446a7d44947338c8c25d3b03969bcfb64126edc34de0aaec08ebba47509 Nov 28 17:21:38 crc kubenswrapper[5024]: E1128 17:21:38.183964 5024 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.141:43386->38.129.56.141:40169: write tcp 38.129.56.141:43386->38.129.56.141:40169: write: broken pipe Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.278254 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="c14bd832feb4db8425d0f1a45e06a6d0b13d8ee68a565113d9375a7e774e72b0" exitCode=0 Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.278439 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"c14bd832feb4db8425d0f1a45e06a6d0b13d8ee68a565113d9375a7e774e72b0"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.278632 5024 scope.go:117] "RemoveContainer" containerID="7d8f6a9c6d8434b82d8868ca2c29dd5353de86fc7a1c9949e65b4d17fd395785" Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.282271 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0066-account-create-update-swplb" event={"ID":"9848c031-a7cb-4f3e-804b-1142d6ddf3a4","Type":"ContainerStarted","Data":"24d4ef8b20ed1571f4c4b7cc5e3cd031b9e1266734cb1622662c125c385518a9"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.282612 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0066-account-create-update-swplb" event={"ID":"9848c031-a7cb-4f3e-804b-1142d6ddf3a4","Type":"ContainerStarted","Data":"251c9482a3f213314fa1b3926bf1e2db42a4a6a3e966d220f2cca22e25b36eb0"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.284929 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2117-account-create-update-zwt9d" event={"ID":"6cd4b169-ce4b-4b45-969a-7f73011edf61","Type":"ContainerStarted","Data":"6914fad50f5cf01371420c6470f370e29ab87d3e85b8c87d72ef175a7195f884"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.287566 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7056-account-create-update-fh7lw" event={"ID":"fb03410d-e1f0-4036-81fc-76f81bf76340","Type":"ContainerStarted","Data":"789c2446a7d44947338c8c25d3b03969bcfb64126edc34de0aaec08ebba47509"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.291784 5024 generic.go:334] "Generic (PLEG): container finished" podID="c97879e5-b703-4517-bdef-ff788259266f" containerID="f0695fda48a06b1e114bf03cc4a5508e04945ec1d2529dad5d1148acefb511f0" exitCode=0 Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.291852 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5tcbk" 
event={"ID":"c97879e5-b703-4517-bdef-ff788259266f","Type":"ContainerDied","Data":"f0695fda48a06b1e114bf03cc4a5508e04945ec1d2529dad5d1148acefb511f0"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.298842 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b99cs" event={"ID":"770b6c25-63f4-4690-9a2e-b64f74e86272","Type":"ContainerStarted","Data":"8e84bcf72c6fdae9ebeaa642bd7bc9ce3b2433a6cbad8f0f71c3d2e53956de69"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.298916 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b99cs" event={"ID":"770b6c25-63f4-4690-9a2e-b64f74e86272","Type":"ContainerStarted","Data":"a79cc7c3f506eb9633611afbeccb69d224e3fb3e81d8463ac69b5abd89521e24"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.322339 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d45mr" event={"ID":"9eff2673-6be4-4fe9-b36d-c7ab184b1a14","Type":"ContainerStarted","Data":"c072755df2fc2175536171be9a4ad5431211d6b511eb010af8edfd2da62337a5"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.324283 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0066-account-create-update-swplb" podStartSLOduration=4.324262867 podStartE2EDuration="4.324262867s" podCreationTimestamp="2025-11-28 17:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:38.312294712 +0000 UTC m=+1400.361215627" watchObservedRunningTime="2025-11-28 17:21:38.324262867 +0000 UTC m=+1400.373183772" Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.326200 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t4sf" event={"ID":"ec4831bb-4252-4d37-83f4-1b9e4f88ea35","Type":"ContainerStarted","Data":"c985875bfd21907de3616e447a62ba8f700ee17941725e201645899ab3f91238"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.335325 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7c64-account-create-update-67zlr" event={"ID":"e9fa01bb-f5e1-437f-b417-f201ad7b2fad","Type":"ContainerStarted","Data":"6bc8c17807276cd16e96fa31ad61d123c73b72c15d302cbec47dc0baf08e73b7"} Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.380663 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-b99cs" podStartSLOduration=4.380639281 podStartE2EDuration="4.380639281s" podCreationTimestamp="2025-11-28 17:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:38.360726887 +0000 UTC m=+1400.409647792" watchObservedRunningTime="2025-11-28 17:21:38.380639281 +0000 UTC m=+1400.429560186" Nov 28 17:21:38 crc kubenswrapper[5024]: I1128 17:21:38.398389 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7056-account-create-update-fh7lw" podStartSLOduration=4.398369698 podStartE2EDuration="4.398369698s" podCreationTimestamp="2025-11-28 17:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:38.397427863 +0000 UTC m=+1400.446348768" watchObservedRunningTime="2025-11-28 17:21:38.398369698 +0000 UTC m=+1400.447290603" Nov 28 17:21:38 crc kubenswrapper[5024]: E1128 17:21:38.927526 5024 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eff2673_6be4_4fe9_b36d_c7ab184b1a14.slice/crio-0df8007f5a4fe9d11a03d235b776517d8607eac5cab61c7251e6f626e1004d2f.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.362528 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b"} Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.365564 5024 generic.go:334] "Generic (PLEG): container finished" podID="9848c031-a7cb-4f3e-804b-1142d6ddf3a4" containerID="24d4ef8b20ed1571f4c4b7cc5e3cd031b9e1266734cb1622662c125c385518a9" exitCode=0 Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.365955 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0066-account-create-update-swplb" event={"ID":"9848c031-a7cb-4f3e-804b-1142d6ddf3a4","Type":"ContainerDied","Data":"24d4ef8b20ed1571f4c4b7cc5e3cd031b9e1266734cb1622662c125c385518a9"} Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.369968 5024 generic.go:334] "Generic (PLEG): container finished" podID="9eff2673-6be4-4fe9-b36d-c7ab184b1a14" containerID="0df8007f5a4fe9d11a03d235b776517d8607eac5cab61c7251e6f626e1004d2f" exitCode=0 Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.370052 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d45mr" event={"ID":"9eff2673-6be4-4fe9-b36d-c7ab184b1a14","Type":"ContainerDied","Data":"0df8007f5a4fe9d11a03d235b776517d8607eac5cab61c7251e6f626e1004d2f"} Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.372658 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"8d648781342145d404b40db39ff87720ffec4f18e53986b3dd425de1ee745dc5"} Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.372686 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"71d2a4421ac6f1c1cdc1e7e9396e5c9e3192ba25f886e4eb351cdf5a13d47208"} Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.374514 5024 generic.go:334] "Generic (PLEG): container finished" podID="6cd4b169-ce4b-4b45-969a-7f73011edf61" containerID="bc0975f5b8227570e94c7468da21a82f6e09d4d10d05b2db1f712c49a8d72a6a" exitCode=0 Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.374558 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2117-account-create-update-zwt9d" event={"ID":"6cd4b169-ce4b-4b45-969a-7f73011edf61","Type":"ContainerDied","Data":"bc0975f5b8227570e94c7468da21a82f6e09d4d10d05b2db1f712c49a8d72a6a"} Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.376311 5024 generic.go:334] "Generic (PLEG): container finished" podID="e9fa01bb-f5e1-437f-b417-f201ad7b2fad" containerID="539acfcc2901784a05afe01437521097fc8818d2525a233db707fbc55d1fb7a8" exitCode=0 Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.376352 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7c64-account-create-update-67zlr" event={"ID":"e9fa01bb-f5e1-437f-b417-f201ad7b2fad","Type":"ContainerDied","Data":"539acfcc2901784a05afe01437521097fc8818d2525a233db707fbc55d1fb7a8"} Nov 
28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.378067 5024 generic.go:334] "Generic (PLEG): container finished" podID="fb03410d-e1f0-4036-81fc-76f81bf76340" containerID="b056391e03d2c5db04e4befcb87f553f9b99d042b4e628aa4aa932e9e1095dc2" exitCode=0 Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.378124 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7056-account-create-update-fh7lw" event={"ID":"fb03410d-e1f0-4036-81fc-76f81bf76340","Type":"ContainerDied","Data":"b056391e03d2c5db04e4befcb87f553f9b99d042b4e628aa4aa932e9e1095dc2"} Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.380436 5024 generic.go:334] "Generic (PLEG): container finished" podID="770b6c25-63f4-4690-9a2e-b64f74e86272" containerID="8e84bcf72c6fdae9ebeaa642bd7bc9ce3b2433a6cbad8f0f71c3d2e53956de69" exitCode=0 Nov 28 17:21:39 crc kubenswrapper[5024]: I1128 17:21:39.380595 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b99cs" event={"ID":"770b6c25-63f4-4690-9a2e-b64f74e86272","Type":"ContainerDied","Data":"8e84bcf72c6fdae9ebeaa642bd7bc9ce3b2433a6cbad8f0f71c3d2e53956de69"} Nov 28 17:21:41 crc kubenswrapper[5024]: I1128 17:21:41.936612 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2117-account-create-update-zwt9d" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.003786 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd4b169-ce4b-4b45-969a-7f73011edf61-operator-scripts\") pod \"6cd4b169-ce4b-4b45-969a-7f73011edf61\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.003837 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bkrq\" (UniqueName: \"kubernetes.io/projected/6cd4b169-ce4b-4b45-969a-7f73011edf61-kube-api-access-2bkrq\") pod \"6cd4b169-ce4b-4b45-969a-7f73011edf61\" (UID: \"6cd4b169-ce4b-4b45-969a-7f73011edf61\") " Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.005097 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd4b169-ce4b-4b45-969a-7f73011edf61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cd4b169-ce4b-4b45-969a-7f73011edf61" (UID: "6cd4b169-ce4b-4b45-969a-7f73011edf61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.013640 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd4b169-ce4b-4b45-969a-7f73011edf61-kube-api-access-2bkrq" (OuterVolumeSpecName: "kube-api-access-2bkrq") pod "6cd4b169-ce4b-4b45-969a-7f73011edf61" (UID: "6cd4b169-ce4b-4b45-969a-7f73011edf61"). InnerVolumeSpecName "kube-api-access-2bkrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.107277 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd4b169-ce4b-4b45-969a-7f73011edf61-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.107338 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bkrq\" (UniqueName: \"kubernetes.io/projected/6cd4b169-ce4b-4b45-969a-7f73011edf61-kube-api-access-2bkrq\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.429759 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-2117-account-create-update-zwt9d" event={"ID":"6cd4b169-ce4b-4b45-969a-7f73011edf61","Type":"ContainerDied","Data":"6914fad50f5cf01371420c6470f370e29ab87d3e85b8c87d72ef175a7195f884"} Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.429796 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-2117-account-create-update-zwt9d" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.429811 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6914fad50f5cf01371420c6470f370e29ab87d3e85b8c87d72ef175a7195f884" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.939690 5024 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod37deb816-c36f-47c7-9d3a-c7373eabeb1f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod37deb816-c36f-47c7-9d3a-c7373eabeb1f] : Timed out while waiting for systemd to remove kubepods-besteffort-pod37deb816_c36f_47c7_9d3a_c7373eabeb1f.slice" Nov 28 17:21:42 crc kubenswrapper[5024]: E1128 17:21:42.940059 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod37deb816-c36f-47c7-9d3a-c7373eabeb1f] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod37deb816-c36f-47c7-9d3a-c7373eabeb1f] : Timed out while waiting for systemd to remove kubepods-besteffort-pod37deb816_c36f_47c7_9d3a_c7373eabeb1f.slice" pod="openstack/keystone-bb6d-account-create-update-4b8k6" podUID="37deb816-c36f-47c7-9d3a-c7373eabeb1f" Nov 28 17:21:42 crc kubenswrapper[5024]: I1128 17:21:42.942188 5024 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podd5b69e2a-d3f0-49f6-badd-92d6a30ba281"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podd5b69e2a-d3f0-49f6-badd-92d6a30ba281] : Timed out while waiting for systemd to remove kubepods-besteffort-podd5b69e2a_d3f0_49f6_badd_92d6a30ba281.slice" Nov 28 17:21:42 crc kubenswrapper[5024]: E1128 17:21:42.942301 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podd5b69e2a-d3f0-49f6-badd-92d6a30ba281] : unable to destroy cgroup paths for cgroup [kubepods besteffort podd5b69e2a-d3f0-49f6-badd-92d6a30ba281] : Timed out while waiting for systemd to remove kubepods-besteffort-podd5b69e2a_d3f0_49f6_badd_92d6a30ba281.slice" pod="openstack/keystone-db-create-8rngl" podUID="d5b69e2a-d3f0-49f6-badd-92d6a30ba281" Nov 28 17:21:43 crc kubenswrapper[5024]: I1128 17:21:43.439201 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb6d-account-create-update-4b8k6" Nov 28 17:21:43 crc kubenswrapper[5024]: I1128 17:21:43.440150 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8rngl" Nov 28 17:21:51 crc kubenswrapper[5024]: E1128 17:21:51.804744 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 28 17:21:51 crc kubenswrapper[5024]: E1128 17:21:51.805400 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb99t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-ppx6b_openstack(c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:21:51 crc kubenswrapper[5024]: E1128 17:21:51.806972 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-ppx6b" podUID="c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.097784 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5tcbk" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.131619 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jmt7n" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.142270 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.176963 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7056-account-create-update-fh7lw" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.204815 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0066-account-create-update-swplb" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.238630 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b99cs" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239393 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb03410d-e1f0-4036-81fc-76f81bf76340-operator-scripts\") pod \"fb03410d-e1f0-4036-81fc-76f81bf76340\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239470 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n7c9\" (UniqueName: \"kubernetes.io/projected/547243af-e537-4990-ba48-b668f5a87bb7-kube-api-access-5n7c9\") pod \"547243af-e537-4990-ba48-b668f5a87bb7\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239532 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt2w2\" (UniqueName: \"kubernetes.io/projected/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-kube-api-access-rt2w2\") pod \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239640 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547243af-e537-4990-ba48-b668f5a87bb7-operator-scripts\") pod \"547243af-e537-4990-ba48-b668f5a87bb7\" (UID: \"547243af-e537-4990-ba48-b668f5a87bb7\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239701 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-operator-scripts\") pod \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\" (UID: \"e9fa01bb-f5e1-437f-b417-f201ad7b2fad\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239851 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5cdj\" (UniqueName: \"kubernetes.io/projected/c97879e5-b703-4517-bdef-ff788259266f-kube-api-access-r5cdj\") pod \"c97879e5-b703-4517-bdef-ff788259266f\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239956 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g277p\" (UniqueName: \"kubernetes.io/projected/fb03410d-e1f0-4036-81fc-76f81bf76340-kube-api-access-g277p\") pod \"fb03410d-e1f0-4036-81fc-76f81bf76340\" (UID: \"fb03410d-e1f0-4036-81fc-76f81bf76340\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.239988 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97879e5-b703-4517-bdef-ff788259266f-operator-scripts\") pod \"c97879e5-b703-4517-bdef-ff788259266f\" (UID: \"c97879e5-b703-4517-bdef-ff788259266f\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.240327 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb03410d-e1f0-4036-81fc-76f81bf76340-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb03410d-e1f0-4036-81fc-76f81bf76340" (UID: "fb03410d-e1f0-4036-81fc-76f81bf76340"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.240882 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9fa01bb-f5e1-437f-b417-f201ad7b2fad" (UID: "e9fa01bb-f5e1-437f-b417-f201ad7b2fad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.240969 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c97879e5-b703-4517-bdef-ff788259266f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c97879e5-b703-4517-bdef-ff788259266f" (UID: "c97879e5-b703-4517-bdef-ff788259266f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.241257 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547243af-e537-4990-ba48-b668f5a87bb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "547243af-e537-4990-ba48-b668f5a87bb7" (UID: "547243af-e537-4990-ba48-b668f5a87bb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.241308 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.241353 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb03410d-e1f0-4036-81fc-76f81bf76340-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.242670 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d45mr" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.247046 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547243af-e537-4990-ba48-b668f5a87bb7-kube-api-access-5n7c9" (OuterVolumeSpecName: "kube-api-access-5n7c9") pod "547243af-e537-4990-ba48-b668f5a87bb7" (UID: "547243af-e537-4990-ba48-b668f5a87bb7"). InnerVolumeSpecName "kube-api-access-5n7c9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.247172 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-kube-api-access-rt2w2" (OuterVolumeSpecName: "kube-api-access-rt2w2") pod "e9fa01bb-f5e1-437f-b417-f201ad7b2fad" (UID: "e9fa01bb-f5e1-437f-b417-f201ad7b2fad"). InnerVolumeSpecName "kube-api-access-rt2w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.247312 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb03410d-e1f0-4036-81fc-76f81bf76340-kube-api-access-g277p" (OuterVolumeSpecName: "kube-api-access-g277p") pod "fb03410d-e1f0-4036-81fc-76f81bf76340" (UID: "fb03410d-e1f0-4036-81fc-76f81bf76340"). InnerVolumeSpecName "kube-api-access-g277p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.247641 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97879e5-b703-4517-bdef-ff788259266f-kube-api-access-r5cdj" (OuterVolumeSpecName: "kube-api-access-r5cdj") pod "c97879e5-b703-4517-bdef-ff788259266f" (UID: "c97879e5-b703-4517-bdef-ff788259266f"). InnerVolumeSpecName "kube-api-access-r5cdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.342840 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-operator-scripts\") pod \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.342898 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-operator-scripts\") pod \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343007 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q98x\" (UniqueName: \"kubernetes.io/projected/770b6c25-63f4-4690-9a2e-b64f74e86272-kube-api-access-8q98x\") pod \"770b6c25-63f4-4690-9a2e-b64f74e86272\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343086 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhqd5\" (UniqueName: \"kubernetes.io/projected/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-kube-api-access-vhqd5\") pod \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\" (UID: \"9848c031-a7cb-4f3e-804b-1142d6ddf3a4\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343167 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/770b6c25-63f4-4690-9a2e-b64f74e86272-operator-scripts\") pod \"770b6c25-63f4-4690-9a2e-b64f74e86272\" (UID: \"770b6c25-63f4-4690-9a2e-b64f74e86272\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343219 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj4kf\" (UniqueName: \"kubernetes.io/projected/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-kube-api-access-hj4kf\") pod 
\"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\" (UID: \"9eff2673-6be4-4fe9-b36d-c7ab184b1a14\") " Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343333 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9eff2673-6be4-4fe9-b36d-c7ab184b1a14" (UID: "9eff2673-6be4-4fe9-b36d-c7ab184b1a14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343633 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5cdj\" (UniqueName: \"kubernetes.io/projected/c97879e5-b703-4517-bdef-ff788259266f-kube-api-access-r5cdj\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343650 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g277p\" (UniqueName: \"kubernetes.io/projected/fb03410d-e1f0-4036-81fc-76f81bf76340-kube-api-access-g277p\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343660 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97879e5-b703-4517-bdef-ff788259266f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343670 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n7c9\" (UniqueName: \"kubernetes.io/projected/547243af-e537-4990-ba48-b668f5a87bb7-kube-api-access-5n7c9\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343678 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt2w2\" (UniqueName: \"kubernetes.io/projected/e9fa01bb-f5e1-437f-b417-f201ad7b2fad-kube-api-access-rt2w2\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343687 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547243af-e537-4990-ba48-b668f5a87bb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343696 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.343717 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770b6c25-63f4-4690-9a2e-b64f74e86272-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "770b6c25-63f4-4690-9a2e-b64f74e86272" (UID: "770b6c25-63f4-4690-9a2e-b64f74e86272"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.344150 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9848c031-a7cb-4f3e-804b-1142d6ddf3a4" (UID: "9848c031-a7cb-4f3e-804b-1142d6ddf3a4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.346738 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-kube-api-access-hj4kf" (OuterVolumeSpecName: "kube-api-access-hj4kf") pod "9eff2673-6be4-4fe9-b36d-c7ab184b1a14" (UID: "9eff2673-6be4-4fe9-b36d-c7ab184b1a14"). InnerVolumeSpecName "kube-api-access-hj4kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.347599 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-kube-api-access-vhqd5" (OuterVolumeSpecName: "kube-api-access-vhqd5") pod "9848c031-a7cb-4f3e-804b-1142d6ddf3a4" (UID: "9848c031-a7cb-4f3e-804b-1142d6ddf3a4"). InnerVolumeSpecName "kube-api-access-vhqd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.355397 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770b6c25-63f4-4690-9a2e-b64f74e86272-kube-api-access-8q98x" (OuterVolumeSpecName: "kube-api-access-8q98x") pod "770b6c25-63f4-4690-9a2e-b64f74e86272" (UID: "770b6c25-63f4-4690-9a2e-b64f74e86272"). InnerVolumeSpecName "kube-api-access-8q98x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.448521 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.448549 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q98x\" (UniqueName: \"kubernetes.io/projected/770b6c25-63f4-4690-9a2e-b64f74e86272-kube-api-access-8q98x\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.448562 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhqd5\" (UniqueName: \"kubernetes.io/projected/9848c031-a7cb-4f3e-804b-1142d6ddf3a4-kube-api-access-vhqd5\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.448571 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/770b6c25-63f4-4690-9a2e-b64f74e86272-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.448580 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj4kf\" (UniqueName: \"kubernetes.io/projected/9eff2673-6be4-4fe9-b36d-c7ab184b1a14-kube-api-access-hj4kf\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.571584 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerStarted","Data":"245862d5ab5795b3c5ec4ec9a9edb68b77d53cfb13a489aef2a8bfa828a46942"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.584801 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d45mr" event={"ID":"9eff2673-6be4-4fe9-b36d-c7ab184b1a14","Type":"ContainerDied","Data":"c072755df2fc2175536171be9a4ad5431211d6b511eb010af8edfd2da62337a5"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.584848 5024 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="c072755df2fc2175536171be9a4ad5431211d6b511eb010af8edfd2da62337a5" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.584919 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d45mr" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.589387 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"8e03ccdf19bf087a43b986fd548339eb5cac6c3aae19299802262551cf62771b"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.589431 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"c9d5fa98e90f69e61e1cf71f090301c85c7f106f450f075955b5457f80634e58"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.594485 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7c64-account-create-update-67zlr" event={"ID":"e9fa01bb-f5e1-437f-b417-f201ad7b2fad","Type":"ContainerDied","Data":"6bc8c17807276cd16e96fa31ad61d123c73b72c15d302cbec47dc0baf08e73b7"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.594515 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bc8c17807276cd16e96fa31ad61d123c73b72c15d302cbec47dc0baf08e73b7" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.594568 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7c64-account-create-update-67zlr" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.605470 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7056-account-create-update-fh7lw" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.609134 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7056-account-create-update-fh7lw" event={"ID":"fb03410d-e1f0-4036-81fc-76f81bf76340","Type":"ContainerDied","Data":"789c2446a7d44947338c8c25d3b03969bcfb64126edc34de0aaec08ebba47509"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.609189 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="789c2446a7d44947338c8c25d3b03969bcfb64126edc34de0aaec08ebba47509" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.617999 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0066-account-create-update-swplb" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.621237 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0066-account-create-update-swplb" event={"ID":"9848c031-a7cb-4f3e-804b-1142d6ddf3a4","Type":"ContainerDied","Data":"251c9482a3f213314fa1b3926bf1e2db42a4a6a3e966d220f2cca22e25b36eb0"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.621558 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251c9482a3f213314fa1b3926bf1e2db42a4a6a3e966d220f2cca22e25b36eb0" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.626966 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmt7n" event={"ID":"547243af-e537-4990-ba48-b668f5a87bb7","Type":"ContainerDied","Data":"55c8a9ade3cd7bb95dffd088f1524e942833db79557bbda2e34481ea6b8b5533"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.627013 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c8a9ade3cd7bb95dffd088f1524e942833db79557bbda2e34481ea6b8b5533" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.627084 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmt7n" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.645227 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t4sf" event={"ID":"ec4831bb-4252-4d37-83f4-1b9e4f88ea35","Type":"ContainerStarted","Data":"2c4d613edf3072f57c8bc6853f13895ae065d8064a45ffaa439a82c57141bc86"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.648782 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.50396777 podStartE2EDuration="1m33.648760073s" podCreationTimestamp="2025-11-28 17:20:19 +0000 UTC" firstStartedPulling="2025-11-28 17:20:38.703201345 +0000 UTC m=+1340.752122250" lastFinishedPulling="2025-11-28 17:21:51.847993648 +0000 UTC m=+1413.896914553" observedRunningTime="2025-11-28 17:21:52.621915907 +0000 UTC m=+1414.670836822" watchObservedRunningTime="2025-11-28 17:21:52.648760073 +0000 UTC m=+1414.697680978" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.653249 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5tcbk" event={"ID":"c97879e5-b703-4517-bdef-ff788259266f","Type":"ContainerDied","Data":"47e150a93477b53f93f1b0ef57881a8acf748160315043e10118a3749b735479"} Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.653455 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e150a93477b53f93f1b0ef57881a8acf748160315043e10118a3749b735479" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.653577 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5tcbk" Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.667352 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-b99cs"
Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.667365 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b99cs" event={"ID":"770b6c25-63f4-4690-9a2e-b64f74e86272","Type":"ContainerDied","Data":"a79cc7c3f506eb9633611afbeccb69d224e3fb3e81d8463ac69b5abd89521e24"}
Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.667486 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a79cc7c3f506eb9633611afbeccb69d224e3fb3e81d8463ac69b5abd89521e24"
Nov 28 17:21:52 crc kubenswrapper[5024]: I1128 17:21:52.682682 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4t4sf" podStartSLOduration=4.575827139 podStartE2EDuration="18.682665316s" podCreationTimestamp="2025-11-28 17:21:34 +0000 UTC" firstStartedPulling="2025-11-28 17:21:37.742943168 +0000 UTC m=+1399.791864063" lastFinishedPulling="2025-11-28 17:21:51.849781335 +0000 UTC m=+1413.898702240" observedRunningTime="2025-11-28 17:21:52.679899063 +0000 UTC m=+1414.728819968" watchObservedRunningTime="2025-11-28 17:21:52.682665316 +0000 UTC m=+1414.731586211"
Nov 28 17:21:52 crc kubenswrapper[5024]: E1128 17:21:52.714349 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-ppx6b" podUID="c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6"
Nov 28 17:21:54 crc kubenswrapper[5024]: I1128 17:21:54.723162 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"d57d4dc5bf27a996ad6bd9e1a4c8439250997175db21579a182dafb3872374b1"}
Nov 28 17:21:54 crc kubenswrapper[5024]: I1128 17:21:54.723731 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"c0d5d20551bb54a2c269972248b2b86fc8ebd785c2b35189c74467f96017006a"}
Nov 28 17:21:54 crc kubenswrapper[5024]: I1128 17:21:54.723748 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"9b7ce0b4a31ed607c877d346f6838b744e9e1e6cafa08272b99a0ed176300579"}
Nov 28 17:21:54 crc kubenswrapper[5024]: I1128 17:21:54.723758 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"0a2c2d2375b607f93e87eac40eb405fe6d0e246dfdd97f031548f3c385850cdf"}
Nov 28 17:21:55 crc kubenswrapper[5024]: I1128 17:21:55.741332 5024 generic.go:334] "Generic (PLEG): container finished" podID="ec4831bb-4252-4d37-83f4-1b9e4f88ea35" containerID="2c4d613edf3072f57c8bc6853f13895ae065d8064a45ffaa439a82c57141bc86" exitCode=0
Nov 28 17:21:55 crc kubenswrapper[5024]: I1128 17:21:55.741427 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t4sf" event={"ID":"ec4831bb-4252-4d37-83f4-1b9e4f88ea35","Type":"ContainerDied","Data":"2c4d613edf3072f57c8bc6853f13895ae065d8064a45ffaa439a82c57141bc86"}
Nov 28 17:21:55 crc kubenswrapper[5024]: I1128 17:21:55.765470 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 28 17:21:56 crc kubenswrapper[5024]: I1128 17:21:56.753369 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"f251fde615040cdac2c5496517f978ec5ca0dd4c4d1e4c6263466115c98a056e"}
Nov 28 17:21:56 crc kubenswrapper[5024]: I1128 17:21:56.753855 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"e8fb26cab395d7c26cd4f8e2c810df139a2f537e71fe9d096e6de271bcb6e11a"}
Nov 28 17:21:56 crc kubenswrapper[5024]: I1128 17:21:56.753871 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"0df50d712d2580495ec4c535ffbab666fcd2fc9889ea80ca0bba6a39b1a27e8d"}
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.219249 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.287414 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxtrf\" (UniqueName: \"kubernetes.io/projected/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-kube-api-access-mxtrf\") pod \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") "
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.287463 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-combined-ca-bundle\") pod \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") "
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.287487 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-config-data\") pod \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\" (UID: \"ec4831bb-4252-4d37-83f4-1b9e4f88ea35\") "
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.306061 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-kube-api-access-mxtrf" (OuterVolumeSpecName: "kube-api-access-mxtrf") pod "ec4831bb-4252-4d37-83f4-1b9e4f88ea35" (UID: "ec4831bb-4252-4d37-83f4-1b9e4f88ea35"). InnerVolumeSpecName "kube-api-access-mxtrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.372338 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec4831bb-4252-4d37-83f4-1b9e4f88ea35" (UID: "ec4831bb-4252-4d37-83f4-1b9e4f88ea35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.380267 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-config-data" (OuterVolumeSpecName: "config-data") pod "ec4831bb-4252-4d37-83f4-1b9e4f88ea35" (UID: "ec4831bb-4252-4d37-83f4-1b9e4f88ea35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.389864 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxtrf\" (UniqueName: \"kubernetes.io/projected/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-kube-api-access-mxtrf\") on node \"crc\" DevicePath \"\""
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.389896 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.389907 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4831bb-4252-4d37-83f4-1b9e4f88ea35-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.769151 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"409a1ba6d88c39c5a60e0d1b9cc7b764e59c880e814fff3e6601e7a87c038c80"}
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.769755 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"ddcaf6811231a17c68cbd5e94702e28168388c4c5c389bc79c4d4d0196e8434f"}
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.769792 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"dcd280677225cba0359073e15c2bb3219140ea9529d6a423daa7fc6089ee7035"}
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.769810 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"aa2554f8-7d4e-425d-a74a-3322dc09d7ed","Type":"ContainerStarted","Data":"4874e4b47266c67d5e1dc1bd642f9a823047030668f6d9e8cebded4ab2e50b7c"}
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.770876 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4t4sf" event={"ID":"ec4831bb-4252-4d37-83f4-1b9e4f88ea35","Type":"ContainerDied","Data":"c985875bfd21907de3616e447a62ba8f700ee17941725e201645899ab3f91238"}
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.770910 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c985875bfd21907de3616e447a62ba8f700ee17941725e201645899ab3f91238"
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.770919 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4t4sf"
Nov 28 17:21:57 crc kubenswrapper[5024]: I1128 17:21:57.820937 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.276370208 podStartE2EDuration="58.820913955s" podCreationTimestamp="2025-11-28 17:20:59 +0000 UTC" firstStartedPulling="2025-11-28 17:21:35.251875197 +0000 UTC m=+1397.300796102" lastFinishedPulling="2025-11-28 17:21:55.796418944 +0000 UTC m=+1417.845339849" observedRunningTime="2025-11-28 17:21:57.81804755 +0000 UTC m=+1419.866968475" watchObservedRunningTime="2025-11-28 17:21:57.820913955 +0000 UTC m=+1419.869834860"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.056354 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-gdjnc"]
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057215 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547243af-e537-4990-ba48-b668f5a87bb7" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057235 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="547243af-e537-4990-ba48-b668f5a87bb7" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057266 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9848c031-a7cb-4f3e-804b-1142d6ddf3a4" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057274 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9848c031-a7cb-4f3e-804b-1142d6ddf3a4" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057284 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eff2673-6be4-4fe9-b36d-c7ab184b1a14" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057291 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eff2673-6be4-4fe9-b36d-c7ab184b1a14" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057313 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb03410d-e1f0-4036-81fc-76f81bf76340" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057320 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb03410d-e1f0-4036-81fc-76f81bf76340" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057332 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770b6c25-63f4-4690-9a2e-b64f74e86272" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057339 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="770b6c25-63f4-4690-9a2e-b64f74e86272" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057394 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97879e5-b703-4517-bdef-ff788259266f" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057405 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97879e5-b703-4517-bdef-ff788259266f" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057426 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd4b169-ce4b-4b45-969a-7f73011edf61" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057433 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd4b169-ce4b-4b45-969a-7f73011edf61" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057457 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4831bb-4252-4d37-83f4-1b9e4f88ea35" containerName="keystone-db-sync"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057464 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4831bb-4252-4d37-83f4-1b9e4f88ea35" containerName="keystone-db-sync"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057476 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fa01bb-f5e1-437f-b417-f201ad7b2fad" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057482 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fa01bb-f5e1-437f-b417-f201ad7b2fad" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: E1128 17:21:58.057494 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e25b56-9b79-4a1c-ac2f-678b370669dd" containerName="ovn-config"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057500 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e25b56-9b79-4a1c-ac2f-678b370669dd" containerName="ovn-config"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057699 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="547243af-e537-4990-ba48-b668f5a87bb7" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057715 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4831bb-4252-4d37-83f4-1b9e4f88ea35" containerName="keystone-db-sync"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057723 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd4b169-ce4b-4b45-969a-7f73011edf61" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057730 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97879e5-b703-4517-bdef-ff788259266f" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057741 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eff2673-6be4-4fe9-b36d-c7ab184b1a14" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057749 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e25b56-9b79-4a1c-ac2f-678b370669dd" containerName="ovn-config"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057768 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fa01bb-f5e1-437f-b417-f201ad7b2fad" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057777 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="770b6c25-63f4-4690-9a2e-b64f74e86272" containerName="mariadb-database-create"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057783 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb03410d-e1f0-4036-81fc-76f81bf76340" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.057795 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9848c031-a7cb-4f3e-804b-1142d6ddf3a4" containerName="mariadb-account-create-update"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.058907 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.085299 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-gdjnc"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.103639 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-config\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.103740 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-dns-svc\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.103842 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.103878 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.104279 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/09675bf1-3898-4f82-9e24-eae8ffe02238-kube-api-access-4ksr6\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.146092 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2t72d"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.147778 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2t72d"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.153564 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.153899 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.154208 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.154345 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7sbwz"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.154619 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.168533 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2t72d"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-config\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206457 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-config-data\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206570 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-dns-svc\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206642 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-scripts\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206766 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206839 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206919 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-fernet-keys\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d"
\"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-fernet-keys\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.206990 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-credential-keys\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.207146 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-combined-ca-bundle\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.207247 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/09675bf1-3898-4f82-9e24-eae8ffe02238-kube-api-access-4ksr6\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.207323 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftwg\" (UniqueName: \"kubernetes.io/projected/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-kube-api-access-gftwg\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.207381 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-config\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.208127 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.208332 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.210095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-dns-svc\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.247202 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ksr6\" (UniqueName: 
\"kubernetes.io/projected/09675bf1-3898-4f82-9e24-eae8ffe02238-kube-api-access-4ksr6\") pod \"dnsmasq-dns-f877ddd87-gdjnc\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.258035 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-gdjnc"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.258982 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.283594 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-d8xkh"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.290567 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.294745 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.309994 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftwg\" (UniqueName: \"kubernetes.io/projected/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-kube-api-access-gftwg\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.310207 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-config-data\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.310340 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-scripts\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.310550 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-credential-keys\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.310636 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-fernet-keys\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.310803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-combined-ca-bundle\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.323501 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-credential-keys\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.330848 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-combined-ca-bundle\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.338202 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-d8xkh"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.342306 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-fernet-keys\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.346330 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-scripts\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.354001 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-config-data\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.365815 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftwg\" (UniqueName: \"kubernetes.io/projected/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-kube-api-access-gftwg\") pod \"keystone-bootstrap-2t72d\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.390526 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-gsz7r"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.394621 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gsz7r" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.406975 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hl9cn" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.407201 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.417597 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gsz7r"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.418793 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.418834 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.419159 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-config\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.419223 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.419278 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-svc\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.419379 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjsn\" (UniqueName: \"kubernetes.io/projected/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-kube-api-access-gqjsn\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.480151 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-bkwj2"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.488315 5024 util.go:30] "No sandbox for pod can be found. 
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.493868 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2crv7"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.494233 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.494521 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.497268 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2t72d"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524166 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-config\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524229 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524265 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-svc\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524298 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2gd\" (UniqueName: \"kubernetes.io/projected/a2b6fe11-1216-4090-b1eb-fb7516bd0977-kube-api-access-bs2gd\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524353 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjsn\" (UniqueName: \"kubernetes.io/projected/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-kube-api-access-gqjsn\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524392 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-config-data\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524548 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-combined-ca-bundle\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524602 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.524628 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.525696 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.526747 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.527288 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-svc\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.527930 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.534590 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-config\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.602278 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bkwj2"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.602334 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-llgqk"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.603958 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-llgqk"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.604091 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-llgqk"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.610784 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-8bjh2"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.612917 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.613277 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hs4gh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.613833 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.618526 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8bjh2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.625076 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tptfj"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.625331 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.626714 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjsn\" (UniqueName: \"kubernetes.io/projected/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-kube-api-access-gqjsn\") pod \"dnsmasq-dns-5959f8865f-d8xkh\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.626918 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-combined-ca-bundle\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.627267 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2gd\" (UniqueName: \"kubernetes.io/projected/a2b6fe11-1216-4090-b1eb-fb7516bd0977-kube-api-access-bs2gd\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.627306 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-scripts\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.627401 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-db-sync-config-data\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.627451 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-config-data\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.627494 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-config-data\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.627553 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74kh\" (UniqueName: \"kubernetes.io/projected/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-kube-api-access-p74kh\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.627701 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-etc-machine-id\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.628001 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-combined-ca-bundle\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.673848 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-combined-ca-bundle\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.674816 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-config-data\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.679657 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2gd\" (UniqueName: \"kubernetes.io/projected/a2b6fe11-1216-4090-b1eb-fb7516bd0977-kube-api-access-bs2gd\") pod \"heat-db-sync-gsz7r\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " pod="openstack/heat-db-sync-gsz7r"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.683492 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8bjh2"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.701195 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-d8xkh"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.703819 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.744288 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tgknw"]
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.753317 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-config\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.753452 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-config-data\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.753513 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72qwf\" (UniqueName: \"kubernetes.io/projected/914b00e1-817d-4776-ae89-1c824e7410bd-kube-api-access-72qwf\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.755967 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74kh\" (UniqueName: \"kubernetes.io/projected/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-kube-api-access-p74kh\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.757493 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-etc-machine-id\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.756089 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-etc-machine-id\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.760296 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-db-sync-config-data\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.760934 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-combined-ca-bundle\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2"
Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.761582 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d748r\" (UniqueName: \"kubernetes.io/projected/7446dd9c-45ba-43bc-9160-5f39384e542a-kube-api-access-d748r\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk"
\"kubernetes.io/projected/7446dd9c-45ba-43bc-9160-5f39384e542a-kube-api-access-d748r\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.762008 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-combined-ca-bundle\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.762438 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-scripts\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.762660 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-combined-ca-bundle\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.762719 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-db-sync-config-data\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.763829 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.778142 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-62wk2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.778244 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.778404 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.788171 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p74kh\" (UniqueName: \"kubernetes.io/projected/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-kube-api-access-p74kh\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.788759 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-combined-ca-bundle\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.820791 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gsz7r" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.821809 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-db-sync-config-data\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.823175 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-scripts\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.876684 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-config-data\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.877276 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-combined-ca-bundle\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.877403 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da020492-bf03-4191-aa2b-e335ac55f7b3-logs\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.877532 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-scripts\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.877653 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-combined-ca-bundle\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.877776 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqpr9\" (UniqueName: \"kubernetes.io/projected/da020492-bf03-4191-aa2b-e335ac55f7b3-kube-api-access-vqpr9\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.877899 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-config\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.878014 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-72qwf\" (UniqueName: \"kubernetes.io/projected/914b00e1-817d-4776-ae89-1c824e7410bd-kube-api-access-72qwf\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.878145 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-db-sync-config-data\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.878258 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-combined-ca-bundle\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.878368 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d748r\" (UniqueName: \"kubernetes.io/projected/7446dd9c-45ba-43bc-9160-5f39384e542a-kube-api-access-d748r\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.890746 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tgknw"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.898917 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-combined-ca-bundle\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.913970 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-config\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.917919 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d748r\" (UniqueName: \"kubernetes.io/projected/7446dd9c-45ba-43bc-9160-5f39384e542a-kube-api-access-d748r\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.937426 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72qwf\" (UniqueName: \"kubernetes.io/projected/914b00e1-817d-4776-ae89-1c824e7410bd-kube-api-access-72qwf\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.941207 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.955075 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.955202 5024 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.963562 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-db-sync-config-data\") pod \"barbican-db-sync-8bjh2\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.963819 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-config-data\") pod \"cinder-db-sync-bkwj2\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.963907 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-combined-ca-bundle\") pod \"neutron-db-sync-llgqk\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.970905 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.976337 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.980560 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-combined-ca-bundle\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.980667 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-config-data\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.980730 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da020492-bf03-4191-aa2b-e335ac55f7b3-logs\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.980769 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-scripts\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.980800 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqpr9\" (UniqueName: \"kubernetes.io/projected/da020492-bf03-4191-aa2b-e335ac55f7b3-kube-api-access-vqpr9\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.983449 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.986069 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.990928 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da020492-bf03-4191-aa2b-e335ac55f7b3-logs\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.991868 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-llgqk" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.992610 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.992842 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:21:58 crc kubenswrapper[5024]: I1128 17:21:58.995889 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-scripts\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:58.997746 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-combined-ca-bundle\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.002656 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-config-data\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.037469 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.040456 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqpr9\" (UniqueName: \"kubernetes.io/projected/da020492-bf03-4191-aa2b-e335ac55f7b3-kube-api-access-vqpr9\") pod \"placement-db-sync-tgknw\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084186 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084356 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084631 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-config-data\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084676 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-scripts\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084704 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtwc\" (UniqueName: \"kubernetes.io/projected/32ab0e88-ae1b-4f41-9301-d419935f30df-kube-api-access-8mtwc\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084746 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084773 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084802 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-log-httpd\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 
28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084820 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084839 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4mx\" (UniqueName: \"kubernetes.io/projected/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-kube-api-access-jx4mx\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084881 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-config\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.084968 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-run-httpd\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.085077 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.117839 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tgknw" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.189950 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtwc\" (UniqueName: \"kubernetes.io/projected/32ab0e88-ae1b-4f41-9301-d419935f30df-kube-api-access-8mtwc\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190247 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190273 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190310 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-log-httpd\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190331 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190351 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4mx\" (UniqueName: \"kubernetes.io/projected/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-kube-api-access-jx4mx\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190391 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-config\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190423 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-run-httpd\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190483 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190510 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190538 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190594 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-config-data\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.190626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-scripts\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.191303 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.191398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-run-httpd\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.191695 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-log-httpd\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.191946 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-config\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.192661 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.192721 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 
17:21:59.193009 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.228077 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-scripts\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0"
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.231972 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0"
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.235105 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtwc\" (UniqueName: \"kubernetes.io/projected/32ab0e88-ae1b-4f41-9301-d419935f30df-kube-api-access-8mtwc\") pod \"dnsmasq-dns-58dd9ff6bc-bnvhr\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.237451 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4mx\" (UniqueName: \"kubernetes.io/projected/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-kube-api-access-jx4mx\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0"
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.237660 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0"
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.238551 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-config-data\") pod \"ceilometer-0\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " pod="openstack/ceilometer-0"
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.349060 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-gdjnc"]
Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.486783 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"
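The entries above show the kubelet's two-phase volume flow: reconciler_common.go logs "operationExecutor.MountVolume started" when the reconciler picks up a volume, and operation_generator.go logs "MountVolume.SetUp succeeded" once the plugin finishes. A minimal Go sketch that pairs the two messages by UniqueName to estimate per-volume mount latency; this is not kubelet code, and the regexes are assumptions tied to this capture's exact formatting (escaped quotes included):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

func main() {
	// klog timestamps ("I1128 17:21:58.878145") carry no year or zone; assume one day.
	tsRe := regexp.MustCompile(`[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})`)
	// In this capture, quotes inside messages appear escaped: UniqueName: \"...\"
	volRe := regexp.MustCompile(`UniqueName: \\"([^"]+)\\"`)

	started := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		ts := tsRe.FindStringSubmatch(line)
		vol := volRe.FindStringSubmatch(line)
		if ts == nil || vol == nil {
			continue
		}
		t, err := time.Parse("15:04:05.000000", ts[1])
		if err != nil {
			continue
		}
		switch {
		case strings.Contains(line, "operationExecutor.MountVolume started"):
			started[vol[1]] = t
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			if s, ok := started[vol[1]]; ok {
				fmt.Printf("%v\t%s\n", t.Sub(s), vol[1])
				delete(started, vol[1])
			}
		}
	}
}
```

Fed this section (e.g. `go run mountpairs.go < kubelet.log`, filename hypothetical), it would report roughly 85 ms for the barbican db-sync-config-data volume (started 17:21:58.878145, succeeded 17:21:58.963562).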
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.650199 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2t72d"] Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.855004 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gsz7r"] Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.858037 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2t72d" event={"ID":"20a361dd-ee2a-4532-8a70-db8ea77f8cbc","Type":"ContainerStarted","Data":"5884875a0d78c87d23379108810bd8f5317d0682c60657537285f7ba482f8086"} Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.874779 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" event={"ID":"09675bf1-3898-4f82-9e24-eae8ffe02238","Type":"ContainerStarted","Data":"b0b2bc83cefddd695b7491fb124ccc1139672b869041162c3e2d601483c3a24a"} Nov 28 17:21:59 crc kubenswrapper[5024]: I1128 17:21:59.891959 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-d8xkh"] Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.279832 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-llgqk"] Nov 28 17:22:00 crc kubenswrapper[5024]: W1128 17:22:00.333632 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda020492_bf03_4191_aa2b_e335ac55f7b3.slice/crio-f52c54c917e6c6b7b685d222354b146461c970587bb83a8baf05bf9476d1cf28 WatchSource:0}: Error finding container f52c54c917e6c6b7b685d222354b146461c970587bb83a8baf05bf9476d1cf28: Status 404 returned error can't find the container with id f52c54c917e6c6b7b685d222354b146461c970587bb83a8baf05bf9476d1cf28 Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.341953 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tgknw"] Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.374339 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8bjh2"] Nov 28 17:22:00 crc kubenswrapper[5024]: W1128 17:22:00.375453 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod914b00e1_817d_4776_ae89_1c824e7410bd.slice/crio-23fa7ede0cec448beaa43d4fc38a7ca8a769d8b882e33a59997195dca7baac65 WatchSource:0}: Error finding container 23fa7ede0cec448beaa43d4fc38a7ca8a769d8b882e33a59997195dca7baac65: Status 404 returned error can't find the container with id 23fa7ede0cec448beaa43d4fc38a7ca8a769d8b882e33a59997195dca7baac65 Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.405389 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bkwj2"] Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.495736 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"] Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.535541 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.901993 5024 generic.go:334] "Generic (PLEG): container finished" podID="a87afa23-98df-46e9-9f0b-ec01a9c32d5d" containerID="9317ca52e31e76b9cb1d92d1ba4dafe608cb8f01bd53a483b33840d342fecb26" exitCode=0 Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.902054 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" event={"ID":"a87afa23-98df-46e9-9f0b-ec01a9c32d5d","Type":"ContainerDied","Data":"9317ca52e31e76b9cb1d92d1ba4dafe608cb8f01bd53a483b33840d342fecb26"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.902566 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" event={"ID":"a87afa23-98df-46e9-9f0b-ec01a9c32d5d","Type":"ContainerStarted","Data":"8d00cfe7a8b27670f3c461de433b1e6fcea161668346481b6e70b495c20b2be9"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.914484 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerStarted","Data":"8de2cac2887df93774ff3cdf0b2a521989e7dd9f2a06777772d5980037a00a12"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.926621 5024 generic.go:334] "Generic (PLEG): container finished" podID="09675bf1-3898-4f82-9e24-eae8ffe02238" containerID="2d08ac8ebcc0e7a934baf125cbc1bd0393b8a44404656f4a0f9bd1881c2a3a67" exitCode=0 Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.926775 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" event={"ID":"09675bf1-3898-4f82-9e24-eae8ffe02238","Type":"ContainerDied","Data":"2d08ac8ebcc0e7a934baf125cbc1bd0393b8a44404656f4a0f9bd1881c2a3a67"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.948331 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" event={"ID":"32ab0e88-ae1b-4f41-9301-d419935f30df","Type":"ContainerStarted","Data":"9204937e03af9e8aba22aec0742518e68df4c758219fb993df88ecdebd32f4f8"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.974983 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bkwj2" event={"ID":"92cbe84b-cd7a-4f20-8aab-92fd90f0c939","Type":"ContainerStarted","Data":"e12a8200e157b087dd5570d16453c6dfc8cb94033e7f575cccd5f30f2db3d85a"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.988907 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-llgqk" event={"ID":"7446dd9c-45ba-43bc-9160-5f39384e542a","Type":"ContainerStarted","Data":"d6e98fcdf95de3cf248a5c6a4ae214279476b78d6a5c6740764948bd57a14405"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.988965 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-llgqk" event={"ID":"7446dd9c-45ba-43bc-9160-5f39384e542a","Type":"ContainerStarted","Data":"fbe849f6bcf755086c569e7e0d37a5f711bc862461aa52a8c95b01a5160bcd59"} Nov 28 17:22:00 crc kubenswrapper[5024]: I1128 17:22:00.998678 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsz7r" event={"ID":"a2b6fe11-1216-4090-b1eb-fb7516bd0977","Type":"ContainerStarted","Data":"7c35306d5adf35b79f310d9d10bbe9437863f015580266837bb30874d0055757"} Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.022171 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgknw" event={"ID":"da020492-bf03-4191-aa2b-e335ac55f7b3","Type":"ContainerStarted","Data":"f52c54c917e6c6b7b685d222354b146461c970587bb83a8baf05bf9476d1cf28"} Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.032466 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-llgqk" podStartSLOduration=3.032446056 podStartE2EDuration="3.032446056s" podCreationTimestamp="2025-11-28 17:21:58 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:01.026599802 +0000 UTC m=+1423.075520707" watchObservedRunningTime="2025-11-28 17:22:01.032446056 +0000 UTC m=+1423.081366961" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.037279 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2t72d" event={"ID":"20a361dd-ee2a-4532-8a70-db8ea77f8cbc","Type":"ContainerStarted","Data":"cb5042ec4d2a9b6dcd9182dd7d36a7d8993c984c37eb1ffeba6cb799e1f9b6ab"} Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.056217 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bjh2" event={"ID":"914b00e1-817d-4776-ae89-1c824e7410bd","Type":"ContainerStarted","Data":"23fa7ede0cec448beaa43d4fc38a7ca8a769d8b882e33a59997195dca7baac65"} Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.071440 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2t72d" podStartSLOduration=3.071420812 podStartE2EDuration="3.071420812s" podCreationTimestamp="2025-11-28 17:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:01.064541631 +0000 UTC m=+1423.113462536" watchObservedRunningTime="2025-11-28 17:22:01.071420812 +0000 UTC m=+1423.120341717" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.394606 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.582833 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.697828 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/09675bf1-3898-4f82-9e24-eae8ffe02238-kube-api-access-4ksr6\") pod \"09675bf1-3898-4f82-9e24-eae8ffe02238\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.698145 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-dns-svc\") pod \"09675bf1-3898-4f82-9e24-eae8ffe02238\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.698212 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-nb\") pod \"09675bf1-3898-4f82-9e24-eae8ffe02238\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.698249 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-config\") pod \"09675bf1-3898-4f82-9e24-eae8ffe02238\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.698274 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-sb\") pod \"09675bf1-3898-4f82-9e24-eae8ffe02238\" (UID: \"09675bf1-3898-4f82-9e24-eae8ffe02238\") " Nov 28 17:22:01 crc kubenswrapper[5024]: 
Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.711532 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09675bf1-3898-4f82-9e24-eae8ffe02238-kube-api-access-4ksr6" (OuterVolumeSpecName: "kube-api-access-4ksr6") pod "09675bf1-3898-4f82-9e24-eae8ffe02238" (UID: "09675bf1-3898-4f82-9e24-eae8ffe02238"). InnerVolumeSpecName "kube-api-access-4ksr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.715817 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-d8xkh"
Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.729160 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09675bf1-3898-4f82-9e24-eae8ffe02238" (UID: "09675bf1-3898-4f82-9e24-eae8ffe02238"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.730828 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09675bf1-3898-4f82-9e24-eae8ffe02238" (UID: "09675bf1-3898-4f82-9e24-eae8ffe02238"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.736983 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-config" (OuterVolumeSpecName: "config") pod "09675bf1-3898-4f82-9e24-eae8ffe02238" (UID: "09675bf1-3898-4f82-9e24-eae8ffe02238"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.752075 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09675bf1-3898-4f82-9e24-eae8ffe02238" (UID: "09675bf1-3898-4f82-9e24-eae8ffe02238"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.800290 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.800326 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.800338 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.800345 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09675bf1-3898-4f82-9e24-eae8ffe02238-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.800355 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ksr6\" (UniqueName: \"kubernetes.io/projected/09675bf1-3898-4f82-9e24-eae8ffe02238-kube-api-access-4ksr6\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.902228 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-svc\") pod \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.902292 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-nb\") pod \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.902324 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqjsn\" (UniqueName: \"kubernetes.io/projected/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-kube-api-access-gqjsn\") pod \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.902611 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-sb\") pod \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.902698 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-swift-storage-0\") pod \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.902736 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-config\") pod \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\" (UID: \"a87afa23-98df-46e9-9f0b-ec01a9c32d5d\") " Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 
17:22:01.909484 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-kube-api-access-gqjsn" (OuterVolumeSpecName: "kube-api-access-gqjsn") pod "a87afa23-98df-46e9-9f0b-ec01a9c32d5d" (UID: "a87afa23-98df-46e9-9f0b-ec01a9c32d5d"). InnerVolumeSpecName "kube-api-access-gqjsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.946669 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-config" (OuterVolumeSpecName: "config") pod "a87afa23-98df-46e9-9f0b-ec01a9c32d5d" (UID: "a87afa23-98df-46e9-9f0b-ec01a9c32d5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.946783 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a87afa23-98df-46e9-9f0b-ec01a9c32d5d" (UID: "a87afa23-98df-46e9-9f0b-ec01a9c32d5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.965878 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a87afa23-98df-46e9-9f0b-ec01a9c32d5d" (UID: "a87afa23-98df-46e9-9f0b-ec01a9c32d5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.969500 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a87afa23-98df-46e9-9f0b-ec01a9c32d5d" (UID: "a87afa23-98df-46e9-9f0b-ec01a9c32d5d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:01 crc kubenswrapper[5024]: I1128 17:22:01.998709 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a87afa23-98df-46e9-9f0b-ec01a9c32d5d" (UID: "a87afa23-98df-46e9-9f0b-ec01a9c32d5d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.005576 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.005604 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.005631 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqjsn\" (UniqueName: \"kubernetes.io/projected/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-kube-api-access-gqjsn\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.005640 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.005651 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.005660 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a87afa23-98df-46e9-9f0b-ec01a9c32d5d-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.087244 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" event={"ID":"09675bf1-3898-4f82-9e24-eae8ffe02238","Type":"ContainerDied","Data":"b0b2bc83cefddd695b7491fb124ccc1139672b869041162c3e2d601483c3a24a"} Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.087269 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-gdjnc" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.087297 5024 scope.go:117] "RemoveContainer" containerID="2d08ac8ebcc0e7a934baf125cbc1bd0393b8a44404656f4a0f9bd1881c2a3a67" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.105161 5024 generic.go:334] "Generic (PLEG): container finished" podID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerID="4d290a90292ff93a1583080f162ca4a0cc766bdace50735c3ffda47d59660a2c" exitCode=0 Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.105292 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" event={"ID":"32ab0e88-ae1b-4f41-9301-d419935f30df","Type":"ContainerDied","Data":"4d290a90292ff93a1583080f162ca4a0cc766bdace50735c3ffda47d59660a2c"} Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.109964 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.109965 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-d8xkh" event={"ID":"a87afa23-98df-46e9-9f0b-ec01a9c32d5d","Type":"ContainerDied","Data":"8d00cfe7a8b27670f3c461de433b1e6fcea161668346481b6e70b495c20b2be9"} Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.148397 5024 scope.go:117] "RemoveContainer" containerID="9317ca52e31e76b9cb1d92d1ba4dafe608cb8f01bd53a483b33840d342fecb26" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.180872 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-gdjnc"] Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.192093 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-gdjnc"] Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.267546 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-d8xkh"] Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.278478 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-d8xkh"] Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.529388 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09675bf1-3898-4f82-9e24-eae8ffe02238" path="/var/lib/kubelet/pods/09675bf1-3898-4f82-9e24-eae8ffe02238/volumes" Nov 28 17:22:02 crc kubenswrapper[5024]: I1128 17:22:02.530527 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87afa23-98df-46e9-9f0b-ec01a9c32d5d" path="/var/lib/kubelet/pods/a87afa23-98df-46e9-9f0b-ec01a9c32d5d/volumes" Nov 28 17:22:05 crc kubenswrapper[5024]: I1128 17:22:05.261797 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" event={"ID":"32ab0e88-ae1b-4f41-9301-d419935f30df","Type":"ContainerStarted","Data":"8c23faba98b605c9abe4db3008c58d98113e72a7823fd2e41f37b6282b2f14c1"} Nov 28 17:22:05 crc kubenswrapper[5024]: I1128 17:22:05.262797 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:22:05 crc kubenswrapper[5024]: I1128 17:22:05.291862 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" podStartSLOduration=7.291834603 podStartE2EDuration="7.291834603s" podCreationTimestamp="2025-11-28 17:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:05.284418033 +0000 UTC m=+1427.333338948" watchObservedRunningTime="2025-11-28 17:22:05.291834603 +0000 UTC m=+1427.340755508" Nov 28 17:22:05 crc kubenswrapper[5024]: I1128 17:22:05.765962 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:05 crc kubenswrapper[5024]: I1128 17:22:05.769772 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:06 crc kubenswrapper[5024]: I1128 17:22:06.274827 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:07 crc kubenswrapper[5024]: I1128 17:22:07.285513 5024 generic.go:334] "Generic (PLEG): container finished" podID="20a361dd-ee2a-4532-8a70-db8ea77f8cbc" 
containerID="cb5042ec4d2a9b6dcd9182dd7d36a7d8993c984c37eb1ffeba6cb799e1f9b6ab" exitCode=0 Nov 28 17:22:07 crc kubenswrapper[5024]: I1128 17:22:07.285589 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2t72d" event={"ID":"20a361dd-ee2a-4532-8a70-db8ea77f8cbc","Type":"ContainerDied","Data":"cb5042ec4d2a9b6dcd9182dd7d36a7d8993c984c37eb1ffeba6cb799e1f9b6ab"} Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.367750 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.369428 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" containerID="cri-o://667f6207b0846c2aedd8b1a421128da49a0c1dbb6193ff0200162c220dcea269" gracePeriod=600 Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.369492 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="thanos-sidecar" containerID="cri-o://245862d5ab5795b3c5ec4ec9a9edb68b77d53cfb13a489aef2a8bfa828a46942" gracePeriod=600 Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.369563 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="config-reloader" containerID="cri-o://1a8d14a1d59e13c8a36e1679d66c11a5f7760f922d105ae85d2a4091202a5931" gracePeriod=600 Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.489728 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.549643 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-22p46"] Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.549885 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-22p46" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" containerID="cri-o://55b0b60c3bba6dda4c197e053a1481f781982e775595fcdcd13b3cb84da6967a" gracePeriod=10 Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.844738 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.929152 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-config-data\") pod \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.929212 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-credential-keys\") pod \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.929305 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-scripts\") pod \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.929353 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-fernet-keys\") pod \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.929390 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gftwg\" (UniqueName: \"kubernetes.io/projected/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-kube-api-access-gftwg\") pod \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.929490 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-combined-ca-bundle\") pod \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\" (UID: \"20a361dd-ee2a-4532-8a70-db8ea77f8cbc\") " Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.935587 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "20a361dd-ee2a-4532-8a70-db8ea77f8cbc" (UID: "20a361dd-ee2a-4532-8a70-db8ea77f8cbc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.938006 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-scripts" (OuterVolumeSpecName: "scripts") pod "20a361dd-ee2a-4532-8a70-db8ea77f8cbc" (UID: "20a361dd-ee2a-4532-8a70-db8ea77f8cbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.938326 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "20a361dd-ee2a-4532-8a70-db8ea77f8cbc" (UID: "20a361dd-ee2a-4532-8a70-db8ea77f8cbc"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.962176 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-config-data" (OuterVolumeSpecName: "config-data") pod "20a361dd-ee2a-4532-8a70-db8ea77f8cbc" (UID: "20a361dd-ee2a-4532-8a70-db8ea77f8cbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.965879 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20a361dd-ee2a-4532-8a70-db8ea77f8cbc" (UID: "20a361dd-ee2a-4532-8a70-db8ea77f8cbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:09 crc kubenswrapper[5024]: I1128 17:22:09.968751 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-kube-api-access-gftwg" (OuterVolumeSpecName: "kube-api-access-gftwg") pod "20a361dd-ee2a-4532-8a70-db8ea77f8cbc" (UID: "20a361dd-ee2a-4532-8a70-db8ea77f8cbc"). InnerVolumeSpecName "kube-api-access-gftwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.032574 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gftwg\" (UniqueName: \"kubernetes.io/projected/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-kube-api-access-gftwg\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.032619 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.032633 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.032646 5024 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.032658 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.032670 5024 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20a361dd-ee2a-4532-8a70-db8ea77f8cbc-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.332858 5024 generic.go:334] "Generic (PLEG): container finished" podID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerID="55b0b60c3bba6dda4c197e053a1481f781982e775595fcdcd13b3cb84da6967a" exitCode=0 Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.332938 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-22p46" event={"ID":"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b","Type":"ContainerDied","Data":"55b0b60c3bba6dda4c197e053a1481f781982e775595fcdcd13b3cb84da6967a"} Nov 
28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.342176 5024 generic.go:334] "Generic (PLEG): container finished" podID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerID="245862d5ab5795b3c5ec4ec9a9edb68b77d53cfb13a489aef2a8bfa828a46942" exitCode=0 Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.342206 5024 generic.go:334] "Generic (PLEG): container finished" podID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerID="1a8d14a1d59e13c8a36e1679d66c11a5f7760f922d105ae85d2a4091202a5931" exitCode=0 Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.342216 5024 generic.go:334] "Generic (PLEG): container finished" podID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerID="667f6207b0846c2aedd8b1a421128da49a0c1dbb6193ff0200162c220dcea269" exitCode=0 Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.342281 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerDied","Data":"245862d5ab5795b3c5ec4ec9a9edb68b77d53cfb13a489aef2a8bfa828a46942"} Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.342351 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerDied","Data":"1a8d14a1d59e13c8a36e1679d66c11a5f7760f922d105ae85d2a4091202a5931"} Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.342366 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerDied","Data":"667f6207b0846c2aedd8b1a421128da49a0c1dbb6193ff0200162c220dcea269"} Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.347366 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2t72d" event={"ID":"20a361dd-ee2a-4532-8a70-db8ea77f8cbc","Type":"ContainerDied","Data":"5884875a0d78c87d23379108810bd8f5317d0682c60657537285f7ba482f8086"} Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.347422 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5884875a0d78c87d23379108810bd8f5317d0682c60657537285f7ba482f8086" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.347487 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2t72d" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.771007 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": dial tcp 10.217.0.138:9090: connect: connection refused" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.898884 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-22p46" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.985949 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2t72d"] Nov 28 17:22:10 crc kubenswrapper[5024]: I1128 17:22:10.996807 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2t72d"] Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.086078 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l8dtc"] Nov 28 17:22:11 crc kubenswrapper[5024]: E1128 17:22:11.086895 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a361dd-ee2a-4532-8a70-db8ea77f8cbc" containerName="keystone-bootstrap" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.086994 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a361dd-ee2a-4532-8a70-db8ea77f8cbc" containerName="keystone-bootstrap" Nov 28 17:22:11 crc kubenswrapper[5024]: E1128 17:22:11.087111 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09675bf1-3898-4f82-9e24-eae8ffe02238" containerName="init" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.087191 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="09675bf1-3898-4f82-9e24-eae8ffe02238" containerName="init" Nov 28 17:22:11 crc kubenswrapper[5024]: E1128 17:22:11.087287 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87afa23-98df-46e9-9f0b-ec01a9c32d5d" containerName="init" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.087369 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87afa23-98df-46e9-9f0b-ec01a9c32d5d" containerName="init" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.087715 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a361dd-ee2a-4532-8a70-db8ea77f8cbc" containerName="keystone-bootstrap" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.089043 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="09675bf1-3898-4f82-9e24-eae8ffe02238" containerName="init" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.089137 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87afa23-98df-46e9-9f0b-ec01a9c32d5d" containerName="init" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.092595 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.095803 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7sbwz" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.095988 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.095850 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.097243 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.097873 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.098474 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l8dtc"] Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.167687 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cjmv\" (UniqueName: \"kubernetes.io/projected/0a0117fc-7c8f-485d-8e97-539af4f3046d-kube-api-access-9cjmv\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.167745 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-combined-ca-bundle\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.167835 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-scripts\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.167953 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-credential-keys\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.168009 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-fernet-keys\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.168066 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-config-data\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.270126 5024 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-fernet-keys\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.270207 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-config-data\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.270237 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cjmv\" (UniqueName: \"kubernetes.io/projected/0a0117fc-7c8f-485d-8e97-539af4f3046d-kube-api-access-9cjmv\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.270261 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-combined-ca-bundle\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.270300 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-scripts\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.270399 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-credential-keys\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.280277 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-fernet-keys\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.281721 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-combined-ca-bundle\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.283550 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-scripts\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.284497 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-credential-keys\") pod \"keystone-bootstrap-l8dtc\" (UID: 
\"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.289741 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-config-data\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.307692 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cjmv\" (UniqueName: \"kubernetes.io/projected/0a0117fc-7c8f-485d-8e97-539af4f3046d-kube-api-access-9cjmv\") pod \"keystone-bootstrap-l8dtc\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:11 crc kubenswrapper[5024]: I1128 17:22:11.416915 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:12 crc kubenswrapper[5024]: I1128 17:22:12.512238 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a361dd-ee2a-4532-8a70-db8ea77f8cbc" path="/var/lib/kubelet/pods/20a361dd-ee2a-4532-8a70-db8ea77f8cbc/volumes" Nov 28 17:22:15 crc kubenswrapper[5024]: I1128 17:22:15.765631 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": dial tcp 10.217.0.138:9090: connect: connection refused" Nov 28 17:22:15 crc kubenswrapper[5024]: I1128 17:22:15.899486 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-22p46" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Nov 28 17:22:18 crc kubenswrapper[5024]: E1128 17:22:18.343773 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 28 17:22:18 crc kubenswrapper[5024]: E1128 17:22:18.344287 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqpr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-tgknw_openstack(da020492-bf03-4191-aa2b-e335ac55f7b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:22:18 crc kubenswrapper[5024]: E1128 17:22:18.345485 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tgknw" podUID="da020492-bf03-4191-aa2b-e335ac55f7b3" Nov 28 17:22:18 crc kubenswrapper[5024]: E1128 17:22:18.436682 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tgknw" podUID="da020492-bf03-4191-aa2b-e335ac55f7b3" Nov 28 17:22:19 crc kubenswrapper[5024]: E1128 17:22:19.019857 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
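The ErrImagePull / ImagePullBackOff pair above is the kubelet giving up on immediate retries and scheduling the next pull attempt on a growing delay. A small Go sketch of that doubling back-off schedule follows; the 10-second initial delay and 5-minute cap are the commonly cited kubelet defaults and are assumptions here, not values taken from this log.

```go
// Sketch of an exponential image-pull back-off (assumed constants, not
// kubelet source): each failed attempt roughly doubles the wait, capped.
package main

import (
	"fmt"
	"time"
)

func backoffDelays(initial, max time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

func main() {
	// 10s and 5m stand in for the kubelet defaults; treat them as assumptions.
	for i, d := range backoffDelays(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("pull retry %d after %v\n", i+1, d)
	}
}
```

Nov 28 17:22:19 crc kubenswrapper[5024]: E1128 17:22:19.020264 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 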
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9h8bh8ch659h74h66bh669h85h58bhbbh6bh649h8hchd6h56ch84h65ch675hcfh6ch5d4h688h65ch66bh4h5b5h668h76hb7hb8hc8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jx4mx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(01acb9ec-ac92-403c-a3fc-fcbf0e3b800a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:22:21 crc kubenswrapper[5024]: I1128 17:22:21.465510 5024 generic.go:334] "Generic (PLEG): container finished" podID="7446dd9c-45ba-43bc-9160-5f39384e542a" containerID="d6e98fcdf95de3cf248a5c6a4ae214279476b78d6a5c6740764948bd57a14405" exitCode=0 Nov 28 17:22:21 crc kubenswrapper[5024]: I1128 17:22:21.465621 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-llgqk" event={"ID":"7446dd9c-45ba-43bc-9160-5f39384e542a","Type":"ContainerDied","Data":"d6e98fcdf95de3cf248a5c6a4ae214279476b78d6a5c6740764948bd57a14405"} Nov 28 17:22:23 crc kubenswrapper[5024]: I1128 17:22:23.766782 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:22:23 crc kubenswrapper[5024]: I1128 17:22:23.767538 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:25 crc kubenswrapper[5024]: I1128 17:22:25.898570 5024 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-22p46" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Nov 28 17:22:25 crc kubenswrapper[5024]: I1128 17:22:25.898970 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:22:28 crc kubenswrapper[5024]: I1128 17:22:28.768194 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:22:28 crc kubenswrapper[5024]: E1128 17:22:28.821276 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 28 17:22:28 crc kubenswrapper[5024]: E1128 17:22:28.821457 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs2gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-gsz7r_openstack(a2b6fe11-1216-4090-b1eb-fb7516bd0977): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:22:28 crc kubenswrapper[5024]: E1128 17:22:28.823286 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-gsz7r" podUID="a2b6fe11-1216-4090-b1eb-fb7516bd0977" Nov 28 17:22:28 crc kubenswrapper[5024]: I1128 17:22:28.942054 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:22:28 crc kubenswrapper[5024]: I1128 17:22:28.952765 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:28 crc kubenswrapper[5024]: I1128 17:22:28.956423 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-llgqk" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.026486 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d748r\" (UniqueName: \"kubernetes.io/projected/7446dd9c-45ba-43bc-9160-5f39384e542a-kube-api-access-d748r\") pod \"7446dd9c-45ba-43bc-9160-5f39384e542a\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.026576 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.026609 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.026670 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-thanos-prometheus-http-client-file\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.026714 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-tls-assets\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.027742 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plr5t\" (UniqueName: \"kubernetes.io/projected/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-kube-api-access-plr5t\") pod \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028241 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-web-config\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028288 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-config\") pod \"7446dd9c-45ba-43bc-9160-5f39384e542a\" (UID: 
\"7446dd9c-45ba-43bc-9160-5f39384e542a\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028338 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctgv8\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-kube-api-access-ctgv8\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028359 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-prometheus-metric-storage-rulefiles-0\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028397 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config-out\") pod \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\" (UID: \"2a8a5d6d-4404-4848-a8b9-d47cee1e350d\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028417 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-config\") pod \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028504 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-combined-ca-bundle\") pod \"7446dd9c-45ba-43bc-9160-5f39384e542a\" (UID: \"7446dd9c-45ba-43bc-9160-5f39384e542a\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028534 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-sb\") pod \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028551 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-dns-svc\") pod \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028601 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-nb\") pod \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\" (UID: \"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b\") " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.028954 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.029530 5024 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.040334 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-kube-api-access-plr5t" (OuterVolumeSpecName: "kube-api-access-plr5t") pod "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" (UID: "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b"). InnerVolumeSpecName "kube-api-access-plr5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.040362 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.040447 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.040711 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7446dd9c-45ba-43bc-9160-5f39384e542a-kube-api-access-d748r" (OuterVolumeSpecName: "kube-api-access-d748r") pod "7446dd9c-45ba-43bc-9160-5f39384e542a" (UID: "7446dd9c-45ba-43bc-9160-5f39384e542a"). InnerVolumeSpecName "kube-api-access-d748r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.044137 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config-out" (OuterVolumeSpecName: "config-out") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.049107 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-kube-api-access-ctgv8" (OuterVolumeSpecName: "kube-api-access-ctgv8") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "kube-api-access-ctgv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.067221 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config" (OuterVolumeSpecName: "config") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.067362 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-config" (OuterVolumeSpecName: "config") pod "7446dd9c-45ba-43bc-9160-5f39384e542a" (UID: "7446dd9c-45ba-43bc-9160-5f39384e542a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.084277 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-web-config" (OuterVolumeSpecName: "web-config") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.084338 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7446dd9c-45ba-43bc-9160-5f39384e542a" (UID: "7446dd9c-45ba-43bc-9160-5f39384e542a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.085440 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2a8a5d6d-4404-4848-a8b9-d47cee1e350d" (UID: "2a8a5d6d-4404-4848-a8b9-d47cee1e350d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.099473 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-config" (OuterVolumeSpecName: "config") pod "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" (UID: "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.114490 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" (UID: "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.117524 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" (UID: "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.122416 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" (UID: "49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131152 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plr5t\" (UniqueName: \"kubernetes.io/projected/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-kube-api-access-plr5t\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131186 5024 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-web-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131198 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131211 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctgv8\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-kube-api-access-ctgv8\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131220 5024 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config-out\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131230 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131238 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7446dd9c-45ba-43bc-9160-5f39384e542a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131246 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131255 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131263 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131271 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d748r\" (UniqueName: \"kubernetes.io/projected/7446dd9c-45ba-43bc-9160-5f39384e542a-kube-api-access-d748r\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131302 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131313 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131323 5024 
reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.131333 5024 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2a8a5d6d-4404-4848-a8b9-d47cee1e350d-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.159790 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.232693 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.586922 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-22p46" event={"ID":"49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b","Type":"ContainerDied","Data":"445331f540351ad5924a2765b9409848cdcc4a7264e656051ddd4839f76101ed"} Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.586996 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-22p46" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.587003 5024 scope.go:117] "RemoveContainer" containerID="55b0b60c3bba6dda4c197e053a1481f781982e775595fcdcd13b3cb84da6967a" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.592217 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2a8a5d6d-4404-4848-a8b9-d47cee1e350d","Type":"ContainerDied","Data":"ea490dcf90950e7b3891033eb4128bd645aca732cd3dd683ab9f4f39301b15b6"} Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.592402 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.600696 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-llgqk" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.600712 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-llgqk" event={"ID":"7446dd9c-45ba-43bc-9160-5f39384e542a","Type":"ContainerDied","Data":"fbe849f6bcf755086c569e7e0d37a5f711bc862461aa52a8c95b01a5160bcd59"} Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.601269 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe849f6bcf755086c569e7e0d37a5f711bc862461aa52a8c95b01a5160bcd59" Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.602636 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-gsz7r" podUID="a2b6fe11-1216-4090-b1eb-fb7516bd0977" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.647565 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-22p46"] Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.662488 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-22p46"] Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.676080 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.689088 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.705255 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.706275 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7446dd9c-45ba-43bc-9160-5f39384e542a" containerName="neutron-db-sync" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706303 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="7446dd9c-45ba-43bc-9160-5f39384e542a" containerName="neutron-db-sync" Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.706328 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="config-reloader" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706338 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="config-reloader" Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.706359 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706366 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.706383 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706392 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.706407 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" 
containerName="init-config-reloader" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706416 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="init-config-reloader" Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.706434 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="init" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706441 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="init" Nov 28 17:22:29 crc kubenswrapper[5024]: E1128 17:22:29.706458 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="thanos-sidecar" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706467 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="thanos-sidecar" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706771 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="thanos-sidecar" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706788 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706807 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="7446dd9c-45ba-43bc-9160-5f39384e542a" containerName="neutron-db-sync" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706820 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="prometheus" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.706836 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" containerName="config-reloader" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.709375 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.711535 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.712830 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.713167 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.713341 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.714232 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.714478 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-hk4n6" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.720702 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.724968 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.844809 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.844898 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.844948 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32f8d83a-8bc1-446c-a314-451f4abd915b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.844984 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.845107 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.845143 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-config\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.845177 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9d2\" (UniqueName: \"kubernetes.io/projected/32f8d83a-8bc1-446c-a314-451f4abd915b-kube-api-access-mh9d2\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.845235 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32f8d83a-8bc1-446c-a314-451f4abd915b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.845265 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.845297 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.847697 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/32f8d83a-8bc1-446c-a314-451f4abd915b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.949867 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.949945 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-config\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.949991 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9d2\" (UniqueName: \"kubernetes.io/projected/32f8d83a-8bc1-446c-a314-451f4abd915b-kube-api-access-mh9d2\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950071 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32f8d83a-8bc1-446c-a314-451f4abd915b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950101 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950131 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950157 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/32f8d83a-8bc1-446c-a314-451f4abd915b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950248 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950294 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950330 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32f8d83a-8bc1-446c-a314-451f4abd915b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.950355 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.951409 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.954059 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/32f8d83a-8bc1-446c-a314-451f4abd915b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.957040 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32f8d83a-8bc1-446c-a314-451f4abd915b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.957405 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32f8d83a-8bc1-446c-a314-451f4abd915b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.958524 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-config\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.958676 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.964849 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.966160 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.966605 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.968408 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/32f8d83a-8bc1-446c-a314-451f4abd915b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.969566 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9d2\" (UniqueName: \"kubernetes.io/projected/32f8d83a-8bc1-446c-a314-451f4abd915b-kube-api-access-mh9d2\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:29 crc kubenswrapper[5024]: I1128 17:22:29.992330 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"32f8d83a-8bc1-446c-a314-451f4abd915b\") " pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.069165 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.200094 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-5g8qv"] Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.202380 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.210611 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-5g8qv"] Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.258677 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlj8\" (UniqueName: \"kubernetes.io/projected/318e36e2-e4c8-4e51-a332-4434ae8d9e53-kube-api-access-bqlj8\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.258818 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.258861 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-config\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.258909 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.258964 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.258985 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.324484 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58c9d5dbb8-n2r5j"] Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.326821 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.329689 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.329952 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.330902 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.331623 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hs4gh" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.357808 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c9d5dbb8-n2r5j"] Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360500 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-ovndb-tls-certs\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360594 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-httpd-config\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360641 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360698 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-config\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360763 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360793 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgsq\" (UniqueName: \"kubernetes.io/projected/46253c13-9836-4929-8fdd-a2ce0060f149-kube-api-access-ljgsq\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360854 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: 
\"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360885 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360933 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlj8\" (UniqueName: \"kubernetes.io/projected/318e36e2-e4c8-4e51-a332-4434ae8d9e53-kube-api-access-bqlj8\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360965 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-config\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.360986 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-combined-ca-bundle\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.361881 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.363097 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.363854 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-config\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.364625 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.365229 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" 
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.397738 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlj8\" (UniqueName: \"kubernetes.io/projected/318e36e2-e4c8-4e51-a332-4434ae8d9e53-kube-api-access-bqlj8\") pod \"dnsmasq-dns-7d88d7b95f-5g8qv\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv"
Nov 28 17:22:30 crc kubenswrapper[5024]: E1128 17:22:30.449092 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Nov 28 17:22:30 crc kubenswrapper[5024]: E1128 17:22:30.449282 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p74kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-bkwj2_openstack(92cbe84b-cd7a-4f20-8aab-92fd90f0c939): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:22:30 crc kubenswrapper[5024]: E1128 17:22:30.450375 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-bkwj2" podUID="92cbe84b-cd7a-4f20-8aab-92fd90f0c939"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.462178 5024 scope.go:117] "RemoveContainer" containerID="9f94e54e9736c86811816900e1f0babcb022ad5a4be373abffea74a8f143c8d5"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.463172 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgsq\" (UniqueName: \"kubernetes.io/projected/46253c13-9836-4929-8fdd-a2ce0060f149-kube-api-access-ljgsq\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.463298 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-config\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.463322 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-combined-ca-bundle\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.463385 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-ovndb-tls-certs\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.463435 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-httpd-config\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.467039 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-combined-ca-bundle\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.469947 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-ovndb-tls-certs\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.477934 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-httpd-config\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.478968 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-config\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.491395 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgsq\" (UniqueName: \"kubernetes.io/projected/46253c13-9836-4929-8fdd-a2ce0060f149-kube-api-access-ljgsq\") pod \"neutron-58c9d5dbb8-n2r5j\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.531373 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.541297 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a8a5d6d-4404-4848-a8b9-d47cee1e350d" path="/var/lib/kubelet/pods/2a8a5d6d-4404-4848-a8b9-d47cee1e350d/volumes"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.557990 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" path="/var/lib/kubelet/pods/49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b/volumes"
Nov 28 17:22:30 crc kubenswrapper[5024]: E1128 17:22:30.659108 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-bkwj2" podUID="92cbe84b-cd7a-4f20-8aab-92fd90f0c939"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.663724 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c9d5dbb8-n2r5j"
Nov 28 17:22:30 crc kubenswrapper[5024]: I1128 17:22:30.899984 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-22p46" podUID="49e8e3e8-5ba4-4a0f-a1df-889e581a1d7b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout"
Nov 28 17:22:31 crc kubenswrapper[5024]: I1128 17:22:31.041511 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l8dtc"]
Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.259196 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7978574989-5r9v4"]
Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.261139 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.263363 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.263490 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.282692 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7978574989-5r9v4"] Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.447187 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v952\" (UniqueName: \"kubernetes.io/projected/dce14449-21ac-4abd-9e71-13fa2a0c471b-kube-api-access-5v952\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.447296 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-internal-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.447352 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-ovndb-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.447420 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-public-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.447509 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-combined-ca-bundle\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.447556 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-httpd-config\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.447586 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-config\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.549058 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-public-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.549150 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-combined-ca-bundle\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.549189 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-httpd-config\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.549212 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-config\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.549318 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v952\" (UniqueName: \"kubernetes.io/projected/dce14449-21ac-4abd-9e71-13fa2a0c471b-kube-api-access-5v952\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.549360 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-internal-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.549409 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-ovndb-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.559846 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-internal-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.562564 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-public-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.565744 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-httpd-config\") pod \"neutron-7978574989-5r9v4\" (UID: 
\"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.566144 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-combined-ca-bundle\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.576812 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-ovndb-tls-certs\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.590438 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dce14449-21ac-4abd-9e71-13fa2a0c471b-config\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.596349 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v952\" (UniqueName: \"kubernetes.io/projected/dce14449-21ac-4abd-9e71-13fa2a0c471b-kube-api-access-5v952\") pod \"neutron-7978574989-5r9v4\" (UID: \"dce14449-21ac-4abd-9e71-13fa2a0c471b\") " pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:33 crc kubenswrapper[5024]: I1128 17:22:33.886640 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:35 crc kubenswrapper[5024]: W1128 17:22:35.424264 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a0117fc_7c8f_485d_8e97_539af4f3046d.slice/crio-1ae59a1b1f68737c8b6d579beac01816772ccd411a992d5852ad61f30edbe906 WatchSource:0}: Error finding container 1ae59a1b1f68737c8b6d579beac01816772ccd411a992d5852ad61f30edbe906: Status 404 returned error can't find the container with id 1ae59a1b1f68737c8b6d579beac01816772ccd411a992d5852ad61f30edbe906 Nov 28 17:22:35 crc kubenswrapper[5024]: I1128 17:22:35.476716 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 17:22:35 crc kubenswrapper[5024]: I1128 17:22:35.499628 5024 scope.go:117] "RemoveContainer" containerID="245862d5ab5795b3c5ec4ec9a9edb68b77d53cfb13a489aef2a8bfa828a46942" Nov 28 17:22:35 crc kubenswrapper[5024]: I1128 17:22:35.718682 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l8dtc" event={"ID":"0a0117fc-7c8f-485d-8e97-539af4f3046d","Type":"ContainerStarted","Data":"1ae59a1b1f68737c8b6d579beac01816772ccd411a992d5852ad61f30edbe906"} Nov 28 17:22:35 crc kubenswrapper[5024]: I1128 17:22:35.758290 5024 scope.go:117] "RemoveContainer" containerID="1a8d14a1d59e13c8a36e1679d66c11a5f7760f922d105ae85d2a4091202a5931" Nov 28 17:22:35 crc kubenswrapper[5024]: I1128 17:22:35.820175 5024 scope.go:117] "RemoveContainer" containerID="667f6207b0846c2aedd8b1a421128da49a0c1dbb6193ff0200162c220dcea269" Nov 28 17:22:35 crc kubenswrapper[5024]: I1128 17:22:35.888446 5024 scope.go:117] "RemoveContainer" containerID="b395afa75b0ad17f7cdd1cbdf43f18a7de598ef4be44dc4db2bef1b45e1a42fc" Nov 28 17:22:35 crc kubenswrapper[5024]: I1128 17:22:35.992440 5024 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 17:22:36 crc kubenswrapper[5024]: W1128 17:22:36.005268 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32f8d83a_8bc1_446c_a314_451f4abd915b.slice/crio-5b831b7a7432e566e8c79db4ed002b07b0fe6f35dd7424f981d8d5d4e7536669 WatchSource:0}: Error finding container 5b831b7a7432e566e8c79db4ed002b07b0fe6f35dd7424f981d8d5d4e7536669: Status 404 returned error can't find the container with id 5b831b7a7432e566e8c79db4ed002b07b0fe6f35dd7424f981d8d5d4e7536669 Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.108142 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-5g8qv"] Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.315390 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7978574989-5r9v4"] Nov 28 17:22:36 crc kubenswrapper[5024]: W1128 17:22:36.327585 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce14449_21ac_4abd_9e71_13fa2a0c471b.slice/crio-c006c6baa8aec668002c398fa8599441877a9f87852d8349c4e1283aa0fd8779 WatchSource:0}: Error finding container c006c6baa8aec668002c398fa8599441877a9f87852d8349c4e1283aa0fd8779: Status 404 returned error can't find the container with id c006c6baa8aec668002c398fa8599441877a9f87852d8349c4e1283aa0fd8779 Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.449294 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c9d5dbb8-n2r5j"] Nov 28 17:22:36 crc kubenswrapper[5024]: W1128 17:22:36.460213 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46253c13_9836_4929_8fdd_a2ce0060f149.slice/crio-0424b20b779b999ed089cc6bacb152c510cbab384ec911a34e34326dc5bcd059 WatchSource:0}: Error finding container 0424b20b779b999ed089cc6bacb152c510cbab384ec911a34e34326dc5bcd059: Status 404 returned error can't find the container with id 0424b20b779b999ed089cc6bacb152c510cbab384ec911a34e34326dc5bcd059 Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.735035 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32f8d83a-8bc1-446c-a314-451f4abd915b","Type":"ContainerStarted","Data":"5b831b7a7432e566e8c79db4ed002b07b0fe6f35dd7424f981d8d5d4e7536669"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.742771 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c9d5dbb8-n2r5j" event={"ID":"46253c13-9836-4929-8fdd-a2ce0060f149","Type":"ContainerStarted","Data":"0424b20b779b999ed089cc6bacb152c510cbab384ec911a34e34326dc5bcd059"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.744172 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7978574989-5r9v4" event={"ID":"dce14449-21ac-4abd-9e71-13fa2a0c471b","Type":"ContainerStarted","Data":"c006c6baa8aec668002c398fa8599441877a9f87852d8349c4e1283aa0fd8779"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.746941 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerStarted","Data":"90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.755081 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-l8dtc" event={"ID":"0a0117fc-7c8f-485d-8e97-539af4f3046d","Type":"ContainerStarted","Data":"52b8c3267caabff5e0e3c87808dbf2e46ed2d0aefecfad782c8fceb9de009672"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.761391 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgknw" event={"ID":"da020492-bf03-4191-aa2b-e335ac55f7b3","Type":"ContainerStarted","Data":"5451eaf0bd3116c15054f998cb71f4b5d9f0d39a9396c60ce88d12f529bf4a52"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.765323 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bjh2" event={"ID":"914b00e1-817d-4776-ae89-1c824e7410bd","Type":"ContainerStarted","Data":"a5fea2759b5f5bce75972ef521aac04466f537832c858e9bee5b8c12be7120b4"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.785688 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l8dtc" podStartSLOduration=25.785660779 podStartE2EDuration="25.785660779s" podCreationTimestamp="2025-11-28 17:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:36.774832152 +0000 UTC m=+1458.823753057" watchObservedRunningTime="2025-11-28 17:22:36.785660779 +0000 UTC m=+1458.834581704" Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.805963 5024 generic.go:334] "Generic (PLEG): container finished" podID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerID="c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098" exitCode=0 Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.806368 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" event={"ID":"318e36e2-e4c8-4e51-a332-4434ae8d9e53","Type":"ContainerDied","Data":"c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.806585 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" event={"ID":"318e36e2-e4c8-4e51-a332-4434ae8d9e53","Type":"ContainerStarted","Data":"0a3ade2e072f0ebf71ab64a5cd069ab91a8b07e5ad4e563e4eaf06cf1d9fca46"} Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.853149 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tgknw" podStartSLOduration=3.511569132 podStartE2EDuration="38.853125133s" podCreationTimestamp="2025-11-28 17:21:58 +0000 UTC" firstStartedPulling="2025-11-28 17:22:00.348354602 +0000 UTC m=+1422.397275507" lastFinishedPulling="2025-11-28 17:22:35.689910603 +0000 UTC m=+1457.738831508" observedRunningTime="2025-11-28 17:22:36.805672657 +0000 UTC m=+1458.854593562" watchObservedRunningTime="2025-11-28 17:22:36.853125133 +0000 UTC m=+1458.902046038" Nov 28 17:22:36 crc kubenswrapper[5024]: I1128 17:22:36.873136 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-8bjh2" podStartSLOduration=8.863368542 podStartE2EDuration="38.87311306s" podCreationTimestamp="2025-11-28 17:21:58 +0000 UTC" firstStartedPulling="2025-11-28 17:22:00.400042832 +0000 UTC m=+1422.448963737" lastFinishedPulling="2025-11-28 17:22:30.40978735 +0000 UTC m=+1452.458708255" observedRunningTime="2025-11-28 17:22:36.83328959 +0000 UTC m=+1458.882210495" watchObservedRunningTime="2025-11-28 17:22:36.87311306 +0000 UTC m=+1458.922033965" Nov 28 17:22:37 crc kubenswrapper[5024]: I1128 
17:22:37.858891 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7978574989-5r9v4" event={"ID":"dce14449-21ac-4abd-9e71-13fa2a0c471b","Type":"ContainerStarted","Data":"b220873ff769782984a2ceb1c94cbcac74496e171ceb0d995b9893d00581c014"} Nov 28 17:22:37 crc kubenswrapper[5024]: I1128 17:22:37.869668 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c9d5dbb8-n2r5j" event={"ID":"46253c13-9836-4929-8fdd-a2ce0060f149","Type":"ContainerStarted","Data":"dcbfbe7a9970714e3d892d8691856819bd96023cf5f79397311ebc29b3997dfe"} Nov 28 17:22:37 crc kubenswrapper[5024]: I1128 17:22:37.873152 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ppx6b" event={"ID":"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6","Type":"ContainerStarted","Data":"be0b1636858f531c9152dae25d7e3f478603251ec2aa68ea14b1d021b63cb264"} Nov 28 17:22:37 crc kubenswrapper[5024]: I1128 17:22:37.891313 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ppx6b" podStartSLOduration=9.158319246 podStartE2EDuration="1m10.891295767s" podCreationTimestamp="2025-11-28 17:21:27 +0000 UTC" firstStartedPulling="2025-11-28 17:21:28.722566467 +0000 UTC m=+1390.771487372" lastFinishedPulling="2025-11-28 17:22:30.455542988 +0000 UTC m=+1452.504463893" observedRunningTime="2025-11-28 17:22:37.890385302 +0000 UTC m=+1459.939306207" watchObservedRunningTime="2025-11-28 17:22:37.891295767 +0000 UTC m=+1459.940216672" Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.888312 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c9d5dbb8-n2r5j" event={"ID":"46253c13-9836-4929-8fdd-a2ce0060f149","Type":"ContainerStarted","Data":"1f59b3a535dd27a947e5f56189231f63b538e89df8e7fc281f3c96a88fbab74c"} Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.888817 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.891403 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7978574989-5r9v4" event={"ID":"dce14449-21ac-4abd-9e71-13fa2a0c471b","Type":"ContainerStarted","Data":"1567ef02ea2239e1acaeb2f5625a57609a29019e5b50d7b812da5eabd3a4b685"} Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.891785 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.894944 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" event={"ID":"318e36e2-e4c8-4e51-a332-4434ae8d9e53","Type":"ContainerStarted","Data":"26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8"} Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.895143 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.916229 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58c9d5dbb8-n2r5j" podStartSLOduration=8.916208706 podStartE2EDuration="8.916208706s" podCreationTimestamp="2025-11-28 17:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:38.910472913 +0000 UTC m=+1460.959393808" watchObservedRunningTime="2025-11-28 17:22:38.916208706 +0000 UTC m=+1460.965129611" Nov 28 17:22:38 crc 
kubenswrapper[5024]: I1128 17:22:38.994423 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7978574989-5r9v4" podStartSLOduration=5.9943994929999995 podStartE2EDuration="5.994399493s" podCreationTimestamp="2025-11-28 17:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:38.983643648 +0000 UTC m=+1461.032564563" watchObservedRunningTime="2025-11-28 17:22:38.994399493 +0000 UTC m=+1461.043320408" Nov 28 17:22:38 crc kubenswrapper[5024]: I1128 17:22:38.999219 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" podStartSLOduration=8.999197699 podStartE2EDuration="8.999197699s" podCreationTimestamp="2025-11-28 17:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:38.958772223 +0000 UTC m=+1461.007693118" watchObservedRunningTime="2025-11-28 17:22:38.999197699 +0000 UTC m=+1461.048118604" Nov 28 17:22:40 crc kubenswrapper[5024]: I1128 17:22:40.915699 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32f8d83a-8bc1-446c-a314-451f4abd915b","Type":"ContainerStarted","Data":"d48bbc95fbc085e6936e797fc11eaa07a2d14dab6c156f04d98f8d0ed2eaa753"} Nov 28 17:22:41 crc kubenswrapper[5024]: I1128 17:22:41.934916 5024 generic.go:334] "Generic (PLEG): container finished" podID="da020492-bf03-4191-aa2b-e335ac55f7b3" containerID="5451eaf0bd3116c15054f998cb71f4b5d9f0d39a9396c60ce88d12f529bf4a52" exitCode=0 Nov 28 17:22:41 crc kubenswrapper[5024]: I1128 17:22:41.934954 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgknw" event={"ID":"da020492-bf03-4191-aa2b-e335ac55f7b3","Type":"ContainerDied","Data":"5451eaf0bd3116c15054f998cb71f4b5d9f0d39a9396c60ce88d12f529bf4a52"} Nov 28 17:22:41 crc kubenswrapper[5024]: I1128 17:22:41.939695 5024 generic.go:334] "Generic (PLEG): container finished" podID="0a0117fc-7c8f-485d-8e97-539af4f3046d" containerID="52b8c3267caabff5e0e3c87808dbf2e46ed2d0aefecfad782c8fceb9de009672" exitCode=0 Nov 28 17:22:41 crc kubenswrapper[5024]: I1128 17:22:41.940856 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l8dtc" event={"ID":"0a0117fc-7c8f-485d-8e97-539af4f3046d","Type":"ContainerDied","Data":"52b8c3267caabff5e0e3c87808dbf2e46ed2d0aefecfad782c8fceb9de009672"} Nov 28 17:22:42 crc kubenswrapper[5024]: I1128 17:22:42.953596 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerStarted","Data":"37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391"} Nov 28 17:22:42 crc kubenswrapper[5024]: I1128 17:22:42.955520 5024 generic.go:334] "Generic (PLEG): container finished" podID="914b00e1-817d-4776-ae89-1c824e7410bd" containerID="a5fea2759b5f5bce75972ef521aac04466f537832c858e9bee5b8c12be7120b4" exitCode=0 Nov 28 17:22:42 crc kubenswrapper[5024]: I1128 17:22:42.955598 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bjh2" event={"ID":"914b00e1-817d-4776-ae89-1c824e7410bd","Type":"ContainerDied","Data":"a5fea2759b5f5bce75972ef521aac04466f537832c858e9bee5b8c12be7120b4"} Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.390984 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.510625 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-config-data\") pod \"0a0117fc-7c8f-485d-8e97-539af4f3046d\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.528706 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-combined-ca-bundle\") pod \"0a0117fc-7c8f-485d-8e97-539af4f3046d\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.528821 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-fernet-keys\") pod \"0a0117fc-7c8f-485d-8e97-539af4f3046d\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.528915 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cjmv\" (UniqueName: \"kubernetes.io/projected/0a0117fc-7c8f-485d-8e97-539af4f3046d-kube-api-access-9cjmv\") pod \"0a0117fc-7c8f-485d-8e97-539af4f3046d\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.528974 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-credential-keys\") pod \"0a0117fc-7c8f-485d-8e97-539af4f3046d\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.529044 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-scripts\") pod \"0a0117fc-7c8f-485d-8e97-539af4f3046d\" (UID: \"0a0117fc-7c8f-485d-8e97-539af4f3046d\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.536178 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0117fc-7c8f-485d-8e97-539af4f3046d-kube-api-access-9cjmv" (OuterVolumeSpecName: "kube-api-access-9cjmv") pod "0a0117fc-7c8f-485d-8e97-539af4f3046d" (UID: "0a0117fc-7c8f-485d-8e97-539af4f3046d"). InnerVolumeSpecName "kube-api-access-9cjmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.536510 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0a0117fc-7c8f-485d-8e97-539af4f3046d" (UID: "0a0117fc-7c8f-485d-8e97-539af4f3046d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.542176 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0a0117fc-7c8f-485d-8e97-539af4f3046d" (UID: "0a0117fc-7c8f-485d-8e97-539af4f3046d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.555299 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-scripts" (OuterVolumeSpecName: "scripts") pod "0a0117fc-7c8f-485d-8e97-539af4f3046d" (UID: "0a0117fc-7c8f-485d-8e97-539af4f3046d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.555410 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-config-data" (OuterVolumeSpecName: "config-data") pod "0a0117fc-7c8f-485d-8e97-539af4f3046d" (UID: "0a0117fc-7c8f-485d-8e97-539af4f3046d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.571219 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0117fc-7c8f-485d-8e97-539af4f3046d" (UID: "0a0117fc-7c8f-485d-8e97-539af4f3046d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.633659 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.634781 5024 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.634801 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cjmv\" (UniqueName: \"kubernetes.io/projected/0a0117fc-7c8f-485d-8e97-539af4f3046d-kube-api-access-9cjmv\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.634811 5024 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.634820 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.634829 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0117fc-7c8f-485d-8e97-539af4f3046d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.658435 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tgknw" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.736504 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-config-data\") pod \"da020492-bf03-4191-aa2b-e335ac55f7b3\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.736833 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqpr9\" (UniqueName: \"kubernetes.io/projected/da020492-bf03-4191-aa2b-e335ac55f7b3-kube-api-access-vqpr9\") pod \"da020492-bf03-4191-aa2b-e335ac55f7b3\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.736873 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da020492-bf03-4191-aa2b-e335ac55f7b3-logs\") pod \"da020492-bf03-4191-aa2b-e335ac55f7b3\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.736922 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-combined-ca-bundle\") pod \"da020492-bf03-4191-aa2b-e335ac55f7b3\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.736980 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-scripts\") pod \"da020492-bf03-4191-aa2b-e335ac55f7b3\" (UID: \"da020492-bf03-4191-aa2b-e335ac55f7b3\") " Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.737292 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da020492-bf03-4191-aa2b-e335ac55f7b3-logs" (OuterVolumeSpecName: "logs") pod "da020492-bf03-4191-aa2b-e335ac55f7b3" (UID: "da020492-bf03-4191-aa2b-e335ac55f7b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.737887 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da020492-bf03-4191-aa2b-e335ac55f7b3-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.740811 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-scripts" (OuterVolumeSpecName: "scripts") pod "da020492-bf03-4191-aa2b-e335ac55f7b3" (UID: "da020492-bf03-4191-aa2b-e335ac55f7b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.741628 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da020492-bf03-4191-aa2b-e335ac55f7b3-kube-api-access-vqpr9" (OuterVolumeSpecName: "kube-api-access-vqpr9") pod "da020492-bf03-4191-aa2b-e335ac55f7b3" (UID: "da020492-bf03-4191-aa2b-e335ac55f7b3"). InnerVolumeSpecName "kube-api-access-vqpr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.766710 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-config-data" (OuterVolumeSpecName: "config-data") pod "da020492-bf03-4191-aa2b-e335ac55f7b3" (UID: "da020492-bf03-4191-aa2b-e335ac55f7b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.776654 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da020492-bf03-4191-aa2b-e335ac55f7b3" (UID: "da020492-bf03-4191-aa2b-e335ac55f7b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.840280 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqpr9\" (UniqueName: \"kubernetes.io/projected/da020492-bf03-4191-aa2b-e335ac55f7b3-kube-api-access-vqpr9\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.840322 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.840335 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.840348 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da020492-bf03-4191-aa2b-e335ac55f7b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.965808 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsz7r" event={"ID":"a2b6fe11-1216-4090-b1eb-fb7516bd0977","Type":"ContainerStarted","Data":"ce2e278d3f1707f10d9ad89dabc644167a10172820b7b2bcd7269601353f016a"} Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.968943 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l8dtc" event={"ID":"0a0117fc-7c8f-485d-8e97-539af4f3046d","Type":"ContainerDied","Data":"1ae59a1b1f68737c8b6d579beac01816772ccd411a992d5852ad61f30edbe906"} Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.968970 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae59a1b1f68737c8b6d579beac01816772ccd411a992d5852ad61f30edbe906" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.969010 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l8dtc" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.977158 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tgknw" Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.977172 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tgknw" event={"ID":"da020492-bf03-4191-aa2b-e335ac55f7b3","Type":"ContainerDied","Data":"f52c54c917e6c6b7b685d222354b146461c970587bb83a8baf05bf9476d1cf28"} Nov 28 17:22:43 crc kubenswrapper[5024]: I1128 17:22:43.977272 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f52c54c917e6c6b7b685d222354b146461c970587bb83a8baf05bf9476d1cf28" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.007082 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-gsz7r" podStartSLOduration=2.847895817 podStartE2EDuration="46.0070647s" podCreationTimestamp="2025-11-28 17:21:58 +0000 UTC" firstStartedPulling="2025-11-28 17:21:59.875411066 +0000 UTC m=+1421.924331971" lastFinishedPulling="2025-11-28 17:22:43.034579949 +0000 UTC m=+1465.083500854" observedRunningTime="2025-11-28 17:22:43.998516978 +0000 UTC m=+1466.047437883" watchObservedRunningTime="2025-11-28 17:22:44.0070647 +0000 UTC m=+1466.055985605" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.152642 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5dc99dc88d-6bdv9"] Nov 28 17:22:44 crc kubenswrapper[5024]: E1128 17:22:44.153138 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0117fc-7c8f-485d-8e97-539af4f3046d" containerName="keystone-bootstrap" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.153152 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0117fc-7c8f-485d-8e97-539af4f3046d" containerName="keystone-bootstrap" Nov 28 17:22:44 crc kubenswrapper[5024]: E1128 17:22:44.153177 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da020492-bf03-4191-aa2b-e335ac55f7b3" containerName="placement-db-sync" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.153185 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="da020492-bf03-4191-aa2b-e335ac55f7b3" containerName="placement-db-sync" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.153392 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0117fc-7c8f-485d-8e97-539af4f3046d" containerName="keystone-bootstrap" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.153408 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="da020492-bf03-4191-aa2b-e335ac55f7b3" containerName="placement-db-sync" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.154924 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.160326 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.161778 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.161949 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-62wk2" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.162080 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.162190 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.207430 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dc99dc88d-6bdv9"] Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.230119 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54f6ccfc5c-rvfhm"] Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.231674 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.242471 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.242710 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.248386 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7sbwz" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.250144 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.250409 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.250435 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.254884 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-internal-tls-certs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.255007 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-combined-ca-bundle\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.255135 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ae62fc-6645-4656-a1e9-9fcedf478bd9-logs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: 
\"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.255207 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-scripts\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.255235 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-config-data\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.255355 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-public-tls-certs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.255418 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7sh5\" (UniqueName: \"kubernetes.io/projected/94ae62fc-6645-4656-a1e9-9fcedf478bd9-kube-api-access-t7sh5\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.272660 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54f6ccfc5c-rvfhm"] Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.358765 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-internal-tls-certs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.358852 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fls89\" (UniqueName: \"kubernetes.io/projected/8f65338e-2617-4a88-91ff-3f13acb313bc-kube-api-access-fls89\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.358900 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-combined-ca-bundle\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.358959 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-config-data\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359037 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ae62fc-6645-4656-a1e9-9fcedf478bd9-logs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359092 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-credential-keys\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359139 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-scripts\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359163 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-config-data\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359203 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-public-tls-certs\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359254 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-internal-tls-certs\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359342 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-combined-ca-bundle\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359383 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-public-tls-certs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359424 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-scripts\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359458 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t7sh5\" (UniqueName: \"kubernetes.io/projected/94ae62fc-6645-4656-a1e9-9fcedf478bd9-kube-api-access-t7sh5\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.359504 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-fernet-keys\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.363243 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ae62fc-6645-4656-a1e9-9fcedf478bd9-logs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.366871 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-internal-tls-certs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.374489 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-config-data\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.382882 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-combined-ca-bundle\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.405322 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-scripts\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.405825 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94ae62fc-6645-4656-a1e9-9fcedf478bd9-public-tls-certs\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.414712 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7sh5\" (UniqueName: \"kubernetes.io/projected/94ae62fc-6645-4656-a1e9-9fcedf478bd9-kube-api-access-t7sh5\") pod \"placement-5dc99dc88d-6bdv9\" (UID: \"94ae62fc-6645-4656-a1e9-9fcedf478bd9\") " pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461221 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-scripts\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: 
\"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461292 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-fernet-keys\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461350 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fls89\" (UniqueName: \"kubernetes.io/projected/8f65338e-2617-4a88-91ff-3f13acb313bc-kube-api-access-fls89\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461397 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-config-data\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461450 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-credential-keys\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461494 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-public-tls-certs\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461524 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-internal-tls-certs\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.461550 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-combined-ca-bundle\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.493688 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-combined-ca-bundle\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.493867 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-fernet-keys\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc 
kubenswrapper[5024]: I1128 17:22:44.501489 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-scripts\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.504982 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-config-data\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.513672 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.514655 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-internal-tls-certs\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.524701 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fls89\" (UniqueName: \"kubernetes.io/projected/8f65338e-2617-4a88-91ff-3f13acb313bc-kube-api-access-fls89\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.525166 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-credential-keys\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.525532 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f65338e-2617-4a88-91ff-3f13acb313bc-public-tls-certs\") pod \"keystone-54f6ccfc5c-rvfhm\" (UID: \"8f65338e-2617-4a88-91ff-3f13acb313bc\") " pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.579432 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.786512 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.875115 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72qwf\" (UniqueName: \"kubernetes.io/projected/914b00e1-817d-4776-ae89-1c824e7410bd-kube-api-access-72qwf\") pod \"914b00e1-817d-4776-ae89-1c824e7410bd\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.875373 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-db-sync-config-data\") pod \"914b00e1-817d-4776-ae89-1c824e7410bd\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.875444 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-combined-ca-bundle\") pod \"914b00e1-817d-4776-ae89-1c824e7410bd\" (UID: \"914b00e1-817d-4776-ae89-1c824e7410bd\") " Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.892334 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914b00e1-817d-4776-ae89-1c824e7410bd-kube-api-access-72qwf" (OuterVolumeSpecName: "kube-api-access-72qwf") pod "914b00e1-817d-4776-ae89-1c824e7410bd" (UID: "914b00e1-817d-4776-ae89-1c824e7410bd"). InnerVolumeSpecName "kube-api-access-72qwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.912238 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "914b00e1-817d-4776-ae89-1c824e7410bd" (UID: "914b00e1-817d-4776-ae89-1c824e7410bd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.945172 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "914b00e1-817d-4776-ae89-1c824e7410bd" (UID: "914b00e1-817d-4776-ae89-1c824e7410bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.980747 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72qwf\" (UniqueName: \"kubernetes.io/projected/914b00e1-817d-4776-ae89-1c824e7410bd-kube-api-access-72qwf\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.980784 5024 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:44 crc kubenswrapper[5024]: I1128 17:22:44.980794 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914b00e1-817d-4776-ae89-1c824e7410bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.062957 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8bjh2" event={"ID":"914b00e1-817d-4776-ae89-1c824e7410bd","Type":"ContainerDied","Data":"23fa7ede0cec448beaa43d4fc38a7ca8a769d8b882e33a59997195dca7baac65"} Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.062999 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23fa7ede0cec448beaa43d4fc38a7ca8a769d8b882e33a59997195dca7baac65" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.063110 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8bjh2" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.230853 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dc99dc88d-6bdv9"] Nov 28 17:22:45 crc kubenswrapper[5024]: W1128 17:22:45.298262 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ae62fc_6645_4656_a1e9_9fcedf478bd9.slice/crio-f596df17edbbb6721d26dd9cf6755d20aa82239cda3868eafa68f1512e2a5d5b WatchSource:0}: Error finding container f596df17edbbb6721d26dd9cf6755d20aa82239cda3868eafa68f1512e2a5d5b: Status 404 returned error can't find the container with id f596df17edbbb6721d26dd9cf6755d20aa82239cda3868eafa68f1512e2a5d5b Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.401942 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6d88cbb66c-lp6ws"] Nov 28 17:22:45 crc kubenswrapper[5024]: E1128 17:22:45.402637 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914b00e1-817d-4776-ae89-1c824e7410bd" containerName="barbican-db-sync" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.402658 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="914b00e1-817d-4776-ae89-1c824e7410bd" containerName="barbican-db-sync" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.402867 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="914b00e1-817d-4776-ae89-1c824e7410bd" containerName="barbican-db-sync" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.404493 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.416047 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.416335 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tptfj" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.416462 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.518637 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-config-data\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.519142 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzv2p\" (UniqueName: \"kubernetes.io/projected/a957805d-e8d1-45ac-890f-23ae1e98516a-kube-api-access-pzv2p\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.519293 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a957805d-e8d1-45ac-890f-23ae1e98516a-logs\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.519513 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-combined-ca-bundle\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.519671 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-config-data-custom\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.537727 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.546251 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d88cbb66c-lp6ws"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.608166 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54f6ccfc5c-rvfhm"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.623155 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a957805d-e8d1-45ac-890f-23ae1e98516a-logs\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" 
Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.623291 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-combined-ca-bundle\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.623373 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-config-data-custom\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.623470 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-config-data\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.623542 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzv2p\" (UniqueName: \"kubernetes.io/projected/a957805d-e8d1-45ac-890f-23ae1e98516a-kube-api-access-pzv2p\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.626192 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a957805d-e8d1-45ac-890f-23ae1e98516a-logs\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.629043 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6bd9bb486-bbh5j"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.642777 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.645284 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.664455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-config-data-custom\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.678273 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-config-data\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.678623 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a957805d-e8d1-45ac-890f-23ae1e98516a-combined-ca-bundle\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.681667 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzv2p\" (UniqueName: \"kubernetes.io/projected/a957805d-e8d1-45ac-890f-23ae1e98516a-kube-api-access-pzv2p\") pod \"barbican-worker-6d88cbb66c-lp6ws\" (UID: \"a957805d-e8d1-45ac-890f-23ae1e98516a\") " pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.710981 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d88cbb66c-lp6ws" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.725644 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bd9bb486-bbh5j"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.727158 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kb6b\" (UniqueName: \"kubernetes.io/projected/bd2c11b3-5ebf-4225-9082-40859af5a480-kube-api-access-8kb6b\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.727245 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-combined-ca-bundle\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.727307 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-config-data-custom\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.727332 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c11b3-5ebf-4225-9082-40859af5a480-logs\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.727374 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-config-data\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.737836 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-5g8qv"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.748949 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-r68zx"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.751505 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.780382 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-r68zx"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.816746 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7ddf475b78-4qwq7"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.820386 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.827103 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.829924 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kb6b\" (UniqueName: \"kubernetes.io/projected/bd2c11b3-5ebf-4225-9082-40859af5a480-kube-api-access-8kb6b\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.830330 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-combined-ca-bundle\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.830635 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-config-data-custom\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.830873 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c11b3-5ebf-4225-9082-40859af5a480-logs\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.832279 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48mv\" (UniqueName: \"kubernetes.io/projected/c2845fcb-6cd4-46e4-b335-e319078d7ae8-kube-api-access-d48mv\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.832901 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-config-data\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.833268 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-config\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.836719 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" 
Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.837523 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-svc\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.833712 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c11b3-5ebf-4225-9082-40859af5a480-logs\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.838254 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.838500 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.857050 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-config-data-custom\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.857384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-config-data\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.858048 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2c11b3-5ebf-4225-9082-40859af5a480-combined-ca-bundle\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.875095 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kb6b\" (UniqueName: \"kubernetes.io/projected/bd2c11b3-5ebf-4225-9082-40859af5a480-kube-api-access-8kb6b\") pod \"barbican-keystone-listener-6bd9bb486-bbh5j\" (UID: \"bd2c11b3-5ebf-4225-9082-40859af5a480\") " pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.876596 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ddf475b78-4qwq7"] Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.948640 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr75h\" (UniqueName: \"kubernetes.io/projected/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-kube-api-access-hr75h\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.948746 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48mv\" (UniqueName: \"kubernetes.io/projected/c2845fcb-6cd4-46e4-b335-e319078d7ae8-kube-api-access-d48mv\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.948803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-config\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.948865 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.948922 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-svc\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.948979 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-logs\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.949006 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data-custom\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.949056 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-combined-ca-bundle\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.949094 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.949121 
5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.949273 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.951405 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-config\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.951833 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.952160 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.952490 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-svc\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.953583 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:45 crc kubenswrapper[5024]: I1128 17:22:45.988852 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48mv\" (UniqueName: \"kubernetes.io/projected/c2845fcb-6cd4-46e4-b335-e319078d7ae8-kube-api-access-d48mv\") pod \"dnsmasq-dns-5ff8449c8c-r68zx\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.052260 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.052418 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hr75h\" (UniqueName: \"kubernetes.io/projected/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-kube-api-access-hr75h\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.052530 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-logs\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.052558 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data-custom\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.052587 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-combined-ca-bundle\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.053677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-logs\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.061930 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-combined-ca-bundle\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.062330 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.064786 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data-custom\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.076833 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr75h\" (UniqueName: \"kubernetes.io/projected/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-kube-api-access-hr75h\") pod \"barbican-api-7ddf475b78-4qwq7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.077359 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.097678 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f6ccfc5c-rvfhm" event={"ID":"8f65338e-2617-4a88-91ff-3f13acb313bc","Type":"ContainerStarted","Data":"868f31e6e6c00f5197f9ce194513dc160f015e3968f141c376536a72f8df840a"} Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.105260 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc99dc88d-6bdv9" event={"ID":"94ae62fc-6645-4656-a1e9-9fcedf478bd9","Type":"ContainerStarted","Data":"d6369e2c956b30ba96c359bbcfbb5afb9aa76a8fc71621857e960110d8c812b8"} Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.105314 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc99dc88d-6bdv9" event={"ID":"94ae62fc-6645-4656-a1e9-9fcedf478bd9","Type":"ContainerStarted","Data":"f596df17edbbb6721d26dd9cf6755d20aa82239cda3868eafa68f1512e2a5d5b"} Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.108500 5024 generic.go:334] "Generic (PLEG): container finished" podID="32f8d83a-8bc1-446c-a314-451f4abd915b" containerID="d48bbc95fbc085e6936e797fc11eaa07a2d14dab6c156f04d98f8d0ed2eaa753" exitCode=0 Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.108570 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32f8d83a-8bc1-446c-a314-451f4abd915b","Type":"ContainerDied","Data":"d48bbc95fbc085e6936e797fc11eaa07a2d14dab6c156f04d98f8d0ed2eaa753"} Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.116223 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" podUID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerName="dnsmasq-dns" containerID="cri-o://26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8" gracePeriod=10 Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.116424 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bkwj2" event={"ID":"92cbe84b-cd7a-4f20-8aab-92fd90f0c939","Type":"ContainerStarted","Data":"16ddd04424ccdaf052f15899fd9579c200e2dc5ef6bb7c9a3b36fade3093d5dd"} Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.119869 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.180821 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-bkwj2" podStartSLOduration=4.355737062 podStartE2EDuration="48.180803821s" podCreationTimestamp="2025-11-28 17:21:58 +0000 UTC" firstStartedPulling="2025-11-28 17:22:00.423065998 +0000 UTC m=+1422.471986903" lastFinishedPulling="2025-11-28 17:22:44.248132757 +0000 UTC m=+1466.297053662" observedRunningTime="2025-11-28 17:22:46.178403613 +0000 UTC m=+1468.227324518" watchObservedRunningTime="2025-11-28 17:22:46.180803821 +0000 UTC m=+1468.229724726" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.207765 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.654425 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d88cbb66c-lp6ws"] Nov 28 17:22:46 crc kubenswrapper[5024]: I1128 17:22:46.778879 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bd9bb486-bbh5j"] Nov 28 17:22:46 crc kubenswrapper[5024]: W1128 17:22:46.807915 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd2c11b3_5ebf_4225_9082_40859af5a480.slice/crio-11978f00b60a86b19846e53f9a729e630bd3c343868c8ad4254da57a84112385 WatchSource:0}: Error finding container 11978f00b60a86b19846e53f9a729e630bd3c343868c8ad4254da57a84112385: Status 404 returned error can't find the container with id 11978f00b60a86b19846e53f9a729e630bd3c343868c8ad4254da57a84112385 Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.153927 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.175628 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ddf475b78-4qwq7"] Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.178539 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32f8d83a-8bc1-446c-a314-451f4abd915b","Type":"ContainerStarted","Data":"aceea6656945553304fc01c01aebfa785c9be7076195d641ee22cd58773a1893"} Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.206787 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54f6ccfc5c-rvfhm" event={"ID":"8f65338e-2617-4a88-91ff-3f13acb313bc","Type":"ContainerStarted","Data":"b79fe2b5abec94e8f555742e16a9356b94c7a0b5675f8659902f55aa6f9e37aa"} Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.208168 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.230678 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-r68zx"] Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.231997 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dc99dc88d-6bdv9" event={"ID":"94ae62fc-6645-4656-a1e9-9fcedf478bd9","Type":"ContainerStarted","Data":"34cc23e17c2082d5e37852a731c33053eb4dda02446172fce62dab8ea907f59f"} Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.232750 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.232783 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.251753 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d88cbb66c-lp6ws" event={"ID":"a957805d-e8d1-45ac-890f-23ae1e98516a","Type":"ContainerStarted","Data":"7657f692180035c5a19c384474fc9d44655a2e67d37a3d42f89618bc1ef7885d"} Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.264457 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" 
event={"ID":"bd2c11b3-5ebf-4225-9082-40859af5a480","Type":"ContainerStarted","Data":"11978f00b60a86b19846e53f9a729e630bd3c343868c8ad4254da57a84112385"} Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.269457 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54f6ccfc5c-rvfhm" podStartSLOduration=3.269433656 podStartE2EDuration="3.269433656s" podCreationTimestamp="2025-11-28 17:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:47.231663535 +0000 UTC m=+1469.280584450" watchObservedRunningTime="2025-11-28 17:22:47.269433656 +0000 UTC m=+1469.318354561" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.276994 5024 generic.go:334] "Generic (PLEG): container finished" podID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerID="26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8" exitCode=0 Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.277058 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" event={"ID":"318e36e2-e4c8-4e51-a332-4434ae8d9e53","Type":"ContainerDied","Data":"26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8"} Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.277087 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" event={"ID":"318e36e2-e4c8-4e51-a332-4434ae8d9e53","Type":"ContainerDied","Data":"0a3ade2e072f0ebf71ab64a5cd069ab91a8b07e5ad4e563e4eaf06cf1d9fca46"} Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.277103 5024 scope.go:117] "RemoveContainer" containerID="26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.277304 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-5g8qv" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.301435 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5dc99dc88d-6bdv9" podStartSLOduration=3.301412243 podStartE2EDuration="3.301412243s" podCreationTimestamp="2025-11-28 17:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:47.260989507 +0000 UTC m=+1469.309910412" watchObservedRunningTime="2025-11-28 17:22:47.301412243 +0000 UTC m=+1469.350333148" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.316283 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-sb\") pod \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.316471 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-config\") pod \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.316541 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-svc\") pod \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.316610 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqlj8\" (UniqueName: \"kubernetes.io/projected/318e36e2-e4c8-4e51-a332-4434ae8d9e53-kube-api-access-bqlj8\") pod \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.316644 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-swift-storage-0\") pod \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.316697 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-nb\") pod \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\" (UID: \"318e36e2-e4c8-4e51-a332-4434ae8d9e53\") " Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.357383 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318e36e2-e4c8-4e51-a332-4434ae8d9e53-kube-api-access-bqlj8" (OuterVolumeSpecName: "kube-api-access-bqlj8") pod "318e36e2-e4c8-4e51-a332-4434ae8d9e53" (UID: "318e36e2-e4c8-4e51-a332-4434ae8d9e53"). InnerVolumeSpecName "kube-api-access-bqlj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.410871 5024 scope.go:117] "RemoveContainer" containerID="c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.428202 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqlj8\" (UniqueName: \"kubernetes.io/projected/318e36e2-e4c8-4e51-a332-4434ae8d9e53-kube-api-access-bqlj8\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.433705 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "318e36e2-e4c8-4e51-a332-4434ae8d9e53" (UID: "318e36e2-e4c8-4e51-a332-4434ae8d9e53"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.526180 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-config" (OuterVolumeSpecName: "config") pod "318e36e2-e4c8-4e51-a332-4434ae8d9e53" (UID: "318e36e2-e4c8-4e51-a332-4434ae8d9e53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.546675 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "318e36e2-e4c8-4e51-a332-4434ae8d9e53" (UID: "318e36e2-e4c8-4e51-a332-4434ae8d9e53"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.548417 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.548455 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.548467 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.589596 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "318e36e2-e4c8-4e51-a332-4434ae8d9e53" (UID: "318e36e2-e4c8-4e51-a332-4434ae8d9e53"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.602616 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "318e36e2-e4c8-4e51-a332-4434ae8d9e53" (UID: "318e36e2-e4c8-4e51-a332-4434ae8d9e53"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.658009 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.658058 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/318e36e2-e4c8-4e51-a332-4434ae8d9e53-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.785224 5024 scope.go:117] "RemoveContainer" containerID="26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8" Nov 28 17:22:47 crc kubenswrapper[5024]: E1128 17:22:47.788162 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8\": container with ID starting with 26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8 not found: ID does not exist" containerID="26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.788270 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8"} err="failed to get container status \"26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8\": rpc error: code = NotFound desc = could not find container \"26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8\": container with ID starting with 26b55ed6b472f46dadc27a41f7897a3bfabdcb858aae708ec8276af5a3ccf7b8 not found: ID does not exist" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.788297 5024 scope.go:117] "RemoveContainer" containerID="c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098" Nov 28 17:22:47 crc kubenswrapper[5024]: E1128 17:22:47.788773 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098\": container with ID starting with c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098 not found: ID does not exist" containerID="c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.788828 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098"} err="failed to get container status \"c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098\": rpc error: code = NotFound desc = could not find container \"c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098\": container with ID starting with c8f44bb5fbbc26e1201cf4d1f74233b70652e00077f3df236d7b2fe3c596d098 not found: ID does not exist" Nov 28 17:22:47 crc kubenswrapper[5024]: I1128 17:22:47.971409 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-5g8qv"] Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.010853 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-5g8qv"] Nov 28 17:22:48 crc kubenswrapper[5024]: E1128 17:22:48.140444 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod318e36e2_e4c8_4e51_a332_4434ae8d9e53.slice/crio-0a3ade2e072f0ebf71ab64a5cd069ab91a8b07e5ad4e563e4eaf06cf1d9fca46\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod318e36e2_e4c8_4e51_a332_4434ae8d9e53.slice\": RecentStats: unable to find data in memory cache]" Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.305535 5024 generic.go:334] "Generic (PLEG): container finished" podID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerID="d9d8f6077f1c355cf69225c0f1b1ef68e7e2e69842d9e4fec2743ff52467c769" exitCode=0 Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.305870 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" event={"ID":"c2845fcb-6cd4-46e4-b335-e319078d7ae8","Type":"ContainerDied","Data":"d9d8f6077f1c355cf69225c0f1b1ef68e7e2e69842d9e4fec2743ff52467c769"} Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.305929 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" event={"ID":"c2845fcb-6cd4-46e4-b335-e319078d7ae8","Type":"ContainerStarted","Data":"4cc13020ac971a2c5ec2c07ae92013420a06a64fb1274c5bafa77459c8c396ea"} Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.314066 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ddf475b78-4qwq7" event={"ID":"de836dec-b7a6-45f0-8d8b-4d29e024e1d7","Type":"ContainerStarted","Data":"c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8"} Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.314126 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ddf475b78-4qwq7" event={"ID":"de836dec-b7a6-45f0-8d8b-4d29e024e1d7","Type":"ContainerStarted","Data":"f52f6b114f5d0ef54e2038d403cd23a2ebfeb9005c7dc46e7b1bc640c7a4133b"} Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.315405 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.369713 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7ddf475b78-4qwq7" podStartSLOduration=3.369694922 podStartE2EDuration="3.369694922s" podCreationTimestamp="2025-11-28 17:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:48.352209976 +0000 UTC m=+1470.401130881" watchObservedRunningTime="2025-11-28 17:22:48.369694922 +0000 UTC m=+1470.418615827" Nov 28 17:22:48 crc kubenswrapper[5024]: I1128 17:22:48.554397 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" path="/var/lib/kubelet/pods/318e36e2-e4c8-4e51-a332-4434ae8d9e53/volumes" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.336884 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" event={"ID":"c2845fcb-6cd4-46e4-b335-e319078d7ae8","Type":"ContainerStarted","Data":"ad22e31f9cb9931334e950172830f3c2556e9a12c87cd4c930e45d6ff3f40f57"} Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.337403 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.344606 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-7ddf475b78-4qwq7" event={"ID":"de836dec-b7a6-45f0-8d8b-4d29e024e1d7","Type":"ContainerStarted","Data":"b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d"} Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.344974 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.374111 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" podStartSLOduration=4.374057767 podStartE2EDuration="4.374057767s" podCreationTimestamp="2025-11-28 17:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:49.367118301 +0000 UTC m=+1471.416039236" watchObservedRunningTime="2025-11-28 17:22:49.374057767 +0000 UTC m=+1471.422978672" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.710654 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56d8644854-9v4h9"] Nov 28 17:22:49 crc kubenswrapper[5024]: E1128 17:22:49.711317 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerName="dnsmasq-dns" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.711330 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerName="dnsmasq-dns" Nov 28 17:22:49 crc kubenswrapper[5024]: E1128 17:22:49.711362 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerName="init" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.711367 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerName="init" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.711609 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="318e36e2-e4c8-4e51-a332-4434ae8d9e53" containerName="dnsmasq-dns" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.712865 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.722037 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.722230 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.729337 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56d8644854-9v4h9"] Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.832143 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-config-data\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.832262 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4994f3-49cb-4dda-957d-8deb244949e7-logs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.832290 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-public-tls-certs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.832359 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-combined-ca-bundle\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.832421 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-internal-tls-certs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.832469 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmlz7\" (UniqueName: \"kubernetes.io/projected/fe4994f3-49cb-4dda-957d-8deb244949e7-kube-api-access-mmlz7\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.832528 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-config-data-custom\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.938774 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-config-data\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.938886 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4994f3-49cb-4dda-957d-8deb244949e7-logs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.938915 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-public-tls-certs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.938985 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-combined-ca-bundle\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.939068 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-internal-tls-certs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.939115 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmlz7\" (UniqueName: \"kubernetes.io/projected/fe4994f3-49cb-4dda-957d-8deb244949e7-kube-api-access-mmlz7\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.939184 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-config-data-custom\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.952305 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe4994f3-49cb-4dda-957d-8deb244949e7-logs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.969651 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-combined-ca-bundle\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.971545 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-config-data\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.971795 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-internal-tls-certs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.975554 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-public-tls-certs\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.976002 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe4994f3-49cb-4dda-957d-8deb244949e7-config-data-custom\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:49 crc kubenswrapper[5024]: I1128 17:22:49.981604 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmlz7\" (UniqueName: \"kubernetes.io/projected/fe4994f3-49cb-4dda-957d-8deb244949e7-kube-api-access-mmlz7\") pod \"barbican-api-56d8644854-9v4h9\" (UID: \"fe4994f3-49cb-4dda-957d-8deb244949e7\") " pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:50 crc kubenswrapper[5024]: I1128 17:22:50.058135 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:22:50 crc kubenswrapper[5024]: I1128 17:22:50.362786 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32f8d83a-8bc1-446c-a314-451f4abd915b","Type":"ContainerStarted","Data":"0fcf8d629843cb5e79b698364d06e428d9bd596054ff69452e47575ed26856f3"} Nov 28 17:22:50 crc kubenswrapper[5024]: I1128 17:22:50.364035 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2b6fe11-1216-4090-b1eb-fb7516bd0977" containerID="ce2e278d3f1707f10d9ad89dabc644167a10172820b7b2bcd7269601353f016a" exitCode=0 Nov 28 17:22:50 crc kubenswrapper[5024]: I1128 17:22:50.364047 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsz7r" event={"ID":"a2b6fe11-1216-4090-b1eb-fb7516bd0977","Type":"ContainerDied","Data":"ce2e278d3f1707f10d9ad89dabc644167a10172820b7b2bcd7269601353f016a"} Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.238845 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56d8644854-9v4h9"] Nov 28 17:22:51 crc kubenswrapper[5024]: W1128 17:22:51.241761 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe4994f3_49cb_4dda_957d_8deb244949e7.slice/crio-ce2a28752f9eba340e73c59f9e9cec987495f2d3dcb9339a91b32b1c5fd24d24 WatchSource:0}: Error finding container ce2a28752f9eba340e73c59f9e9cec987495f2d3dcb9339a91b32b1c5fd24d24: Status 404 returned error can't find the container with id ce2a28752f9eba340e73c59f9e9cec987495f2d3dcb9339a91b32b1c5fd24d24 Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.381279 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32f8d83a-8bc1-446c-a314-451f4abd915b","Type":"ContainerStarted","Data":"29a2887b3a241d658cf3cf04eb579f411b42e1862f79f9b377cc603af31e0b19"} Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.383414 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d8644854-9v4h9" event={"ID":"fe4994f3-49cb-4dda-957d-8deb244949e7","Type":"ContainerStarted","Data":"ce2a28752f9eba340e73c59f9e9cec987495f2d3dcb9339a91b32b1c5fd24d24"} Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.386642 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d88cbb66c-lp6ws" event={"ID":"a957805d-e8d1-45ac-890f-23ae1e98516a","Type":"ContainerStarted","Data":"e24343ef76082811e22cd0e27543acd4089fa4f3adb6409b156743aabeb7265d"} Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.386689 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d88cbb66c-lp6ws" event={"ID":"a957805d-e8d1-45ac-890f-23ae1e98516a","Type":"ContainerStarted","Data":"0058465947f3ab7d34cd49f805c1cb337523058b64c08d9b77541d393a732000"} Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.391350 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" event={"ID":"bd2c11b3-5ebf-4225-9082-40859af5a480","Type":"ContainerStarted","Data":"c7e7df21a9ff88c15a305f044257d970b2429b620aeb411e7deab959088f6cef"} Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.391503 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" 
event={"ID":"bd2c11b3-5ebf-4225-9082-40859af5a480","Type":"ContainerStarted","Data":"e9bc15e78d8805e04a0ed496a82a3b785ee464ce7785510c6bb0fcc75acb9b97"} Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.425416 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.425388356 podStartE2EDuration="22.425388356s" podCreationTimestamp="2025-11-28 17:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:51.411494222 +0000 UTC m=+1473.460415137" watchObservedRunningTime="2025-11-28 17:22:51.425388356 +0000 UTC m=+1473.474309271" Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.450478 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6d88cbb66c-lp6ws" podStartSLOduration=2.281972233 podStartE2EDuration="6.450360034s" podCreationTimestamp="2025-11-28 17:22:45 +0000 UTC" firstStartedPulling="2025-11-28 17:22:46.519965731 +0000 UTC m=+1468.568886636" lastFinishedPulling="2025-11-28 17:22:50.688353542 +0000 UTC m=+1472.737274437" observedRunningTime="2025-11-28 17:22:51.444445577 +0000 UTC m=+1473.493366482" watchObservedRunningTime="2025-11-28 17:22:51.450360034 +0000 UTC m=+1473.499280939" Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.486555 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6bd9bb486-bbh5j" podStartSLOduration=2.627537442 podStartE2EDuration="6.48653243s" podCreationTimestamp="2025-11-28 17:22:45 +0000 UTC" firstStartedPulling="2025-11-28 17:22:46.81827429 +0000 UTC m=+1468.867195185" lastFinishedPulling="2025-11-28 17:22:50.677269268 +0000 UTC m=+1472.726190173" observedRunningTime="2025-11-28 17:22:51.473870161 +0000 UTC m=+1473.522791066" watchObservedRunningTime="2025-11-28 17:22:51.48653243 +0000 UTC m=+1473.535453335" Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.838872 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gsz7r" Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.991790 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs2gd\" (UniqueName: \"kubernetes.io/projected/a2b6fe11-1216-4090-b1eb-fb7516bd0977-kube-api-access-bs2gd\") pod \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.991913 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-combined-ca-bundle\") pod \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " Nov 28 17:22:51 crc kubenswrapper[5024]: I1128 17:22:51.992165 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-config-data\") pod \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\" (UID: \"a2b6fe11-1216-4090-b1eb-fb7516bd0977\") " Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:51.999878 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b6fe11-1216-4090-b1eb-fb7516bd0977-kube-api-access-bs2gd" (OuterVolumeSpecName: "kube-api-access-bs2gd") pod "a2b6fe11-1216-4090-b1eb-fb7516bd0977" (UID: "a2b6fe11-1216-4090-b1eb-fb7516bd0977"). InnerVolumeSpecName "kube-api-access-bs2gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.066178 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2b6fe11-1216-4090-b1eb-fb7516bd0977" (UID: "a2b6fe11-1216-4090-b1eb-fb7516bd0977"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.104782 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs2gd\" (UniqueName: \"kubernetes.io/projected/a2b6fe11-1216-4090-b1eb-fb7516bd0977-kube-api-access-bs2gd\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.104843 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.110078 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-config-data" (OuterVolumeSpecName: "config-data") pod "a2b6fe11-1216-4090-b1eb-fb7516bd0977" (UID: "a2b6fe11-1216-4090-b1eb-fb7516bd0977"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.208045 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b6fe11-1216-4090-b1eb-fb7516bd0977-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.405757 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gsz7r" event={"ID":"a2b6fe11-1216-4090-b1eb-fb7516bd0977","Type":"ContainerDied","Data":"7c35306d5adf35b79f310d9d10bbe9437863f015580266837bb30874d0055757"} Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.406076 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c35306d5adf35b79f310d9d10bbe9437863f015580266837bb30874d0055757" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.406039 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gsz7r" Nov 28 17:22:52 crc kubenswrapper[5024]: I1128 17:22:52.409253 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d8644854-9v4h9" event={"ID":"fe4994f3-49cb-4dda-957d-8deb244949e7","Type":"ContainerStarted","Data":"3c1a94455d89e419b38637d53c45b1e902394d9bcbffb997dc1b40467b98ebfd"} Nov 28 17:22:55 crc kubenswrapper[5024]: I1128 17:22:55.069850 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 17:22:55 crc kubenswrapper[5024]: I1128 17:22:55.531246 5024 generic.go:334] "Generic (PLEG): container finished" podID="c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" containerID="be0b1636858f531c9152dae25d7e3f478603251ec2aa68ea14b1d021b63cb264" exitCode=0 Nov 28 17:22:55 crc kubenswrapper[5024]: I1128 17:22:55.531365 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ppx6b" event={"ID":"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6","Type":"ContainerDied","Data":"be0b1636858f531c9152dae25d7e3f478603251ec2aa68ea14b1d021b63cb264"} Nov 28 17:22:55 crc kubenswrapper[5024]: I1128 17:22:55.554217 5024 generic.go:334] "Generic (PLEG): container finished" podID="92cbe84b-cd7a-4f20-8aab-92fd90f0c939" containerID="16ddd04424ccdaf052f15899fd9579c200e2dc5ef6bb7c9a3b36fade3093d5dd" exitCode=0 Nov 28 17:22:55 crc kubenswrapper[5024]: I1128 17:22:55.554272 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bkwj2" event={"ID":"92cbe84b-cd7a-4f20-8aab-92fd90f0c939","Type":"ContainerDied","Data":"16ddd04424ccdaf052f15899fd9579c200e2dc5ef6bb7c9a3b36fade3093d5dd"} Nov 28 17:22:56 crc kubenswrapper[5024]: I1128 17:22:56.122892 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:22:56 crc kubenswrapper[5024]: I1128 17:22:56.279689 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"] Nov 28 17:22:56 crc kubenswrapper[5024]: I1128 17:22:56.280199 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerName="dnsmasq-dns" containerID="cri-o://8c23faba98b605c9abe4db3008c58d98113e72a7823fd2e41f37b6282b2f14c1" gracePeriod=10 Nov 28 17:22:56 crc kubenswrapper[5024]: I1128 17:22:56.574941 5024 generic.go:334] "Generic (PLEG): container finished" podID="32ab0e88-ae1b-4f41-9301-d419935f30df" 
containerID="8c23faba98b605c9abe4db3008c58d98113e72a7823fd2e41f37b6282b2f14c1" exitCode=0 Nov 28 17:22:56 crc kubenswrapper[5024]: I1128 17:22:56.575035 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" event={"ID":"32ab0e88-ae1b-4f41-9301-d419935f30df","Type":"ContainerDied","Data":"8c23faba98b605c9abe4db3008c58d98113e72a7823fd2e41f37b6282b2f14c1"} Nov 28 17:22:57 crc kubenswrapper[5024]: I1128 17:22:57.752498 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.039443 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.641818 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bkwj2" event={"ID":"92cbe84b-cd7a-4f20-8aab-92fd90f0c939","Type":"ContainerDied","Data":"e12a8200e157b087dd5570d16453c6dfc8cb94033e7f575cccd5f30f2db3d85a"} Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.641857 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12a8200e157b087dd5570d16453c6dfc8cb94033e7f575cccd5f30f2db3d85a" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.642864 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.813146 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-combined-ca-bundle\") pod \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.813510 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74kh\" (UniqueName: \"kubernetes.io/projected/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-kube-api-access-p74kh\") pod \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.813711 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-etc-machine-id\") pod \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.813752 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-config-data\") pod \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.813794 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-scripts\") pod \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\" (UID: \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.813810 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-db-sync-config-data\") pod \"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\" (UID: 
\"92cbe84b-cd7a-4f20-8aab-92fd90f0c939\") " Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.815607 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "92cbe84b-cd7a-4f20-8aab-92fd90f0c939" (UID: "92cbe84b-cd7a-4f20-8aab-92fd90f0c939"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.836728 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-scripts" (OuterVolumeSpecName: "scripts") pod "92cbe84b-cd7a-4f20-8aab-92fd90f0c939" (UID: "92cbe84b-cd7a-4f20-8aab-92fd90f0c939"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.838120 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-kube-api-access-p74kh" (OuterVolumeSpecName: "kube-api-access-p74kh") pod "92cbe84b-cd7a-4f20-8aab-92fd90f0c939" (UID: "92cbe84b-cd7a-4f20-8aab-92fd90f0c939"). InnerVolumeSpecName "kube-api-access-p74kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.858263 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "92cbe84b-cd7a-4f20-8aab-92fd90f0c939" (UID: "92cbe84b-cd7a-4f20-8aab-92fd90f0c939"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.884130 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92cbe84b-cd7a-4f20-8aab-92fd90f0c939" (UID: "92cbe84b-cd7a-4f20-8aab-92fd90f0c939"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.916245 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.916830 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.916921 5024 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.916985 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.917067 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p74kh\" (UniqueName: \"kubernetes.io/projected/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-kube-api-access-p74kh\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:58 crc kubenswrapper[5024]: I1128 17:22:58.943721 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-config-data" (OuterVolumeSpecName: "config-data") pod "92cbe84b-cd7a-4f20-8aab-92fd90f0c939" (UID: "92cbe84b-cd7a-4f20-8aab-92fd90f0c939"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.019648 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cbe84b-cd7a-4f20-8aab-92fd90f0c939-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.684901 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bkwj2" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.793864 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ppx6b" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.878638 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:22:59 crc kubenswrapper[5024]: E1128 17:22:59.879212 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cbe84b-cd7a-4f20-8aab-92fd90f0c939" containerName="cinder-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.879229 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cbe84b-cd7a-4f20-8aab-92fd90f0c939" containerName="cinder-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: E1128 17:22:59.879246 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b6fe11-1216-4090-b1eb-fb7516bd0977" containerName="heat-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.879252 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b6fe11-1216-4090-b1eb-fb7516bd0977" containerName="heat-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: E1128 17:22:59.879271 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" containerName="glance-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.879277 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" containerName="glance-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.879546 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" containerName="glance-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.879561 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b6fe11-1216-4090-b1eb-fb7516bd0977" containerName="heat-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.879584 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cbe84b-cd7a-4f20-8aab-92fd90f0c939" containerName="cinder-db-sync" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.881226 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.886561 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.886800 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.887000 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.887194 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2crv7" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.893387 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.937595 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb99t\" (UniqueName: \"kubernetes.io/projected/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-kube-api-access-rb99t\") pod \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.937733 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-combined-ca-bundle\") pod \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.937793 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-config-data\") pod \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.937842 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-db-sync-config-data\") pod \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\" (UID: \"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6\") " Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.948752 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-kube-api-access-rb99t" (OuterVolumeSpecName: "kube-api-access-rb99t") pod "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" (UID: "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6"). InnerVolumeSpecName "kube-api-access-rb99t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[5024]: I1128 17:22:59.968380 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" (UID: "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.011872 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-qqfdf"] Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.014593 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.038644 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-qqfdf"] Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.039925 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6b35e94-ac6f-43de-8b71-9785ed09145f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.039976 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.040106 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.040140 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.040159 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-scripts\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.040218 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crrmr\" (UniqueName: \"kubernetes.io/projected/c6b35e94-ac6f-43de-8b71-9785ed09145f-kube-api-access-crrmr\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.040266 5024 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.040279 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb99t\" (UniqueName: \"kubernetes.io/projected/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-kube-api-access-rb99t\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.040872 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" (UID: "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.077396 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.097825 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.130919 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-config-data" (OuterVolumeSpecName: "config-data") pod "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" (UID: "c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.143489 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144101 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-scripts\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144240 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jjs4\" (UniqueName: \"kubernetes.io/projected/40c77c2b-ab46-4a48-86e3-350908cc9c8e-kube-api-access-4jjs4\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144387 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144486 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crrmr\" (UniqueName: \"kubernetes.io/projected/c6b35e94-ac6f-43de-8b71-9785ed09145f-kube-api-access-crrmr\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6b35e94-ac6f-43de-8b71-9785ed09145f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144705 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-config\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 
17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144825 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.144965 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-svc\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.145279 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.145397 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-swift-storage-0\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.145505 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.145679 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.145955 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.146754 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6b35e94-ac6f-43de-8b71-9785ed09145f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.149615 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.150204 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " 
pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.153844 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.153861 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-scripts\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.171047 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crrmr\" (UniqueName: \"kubernetes.io/projected/c6b35e94-ac6f-43de-8b71-9785ed09145f-kube-api-access-crrmr\") pod \"cinder-scheduler-0\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.236727 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.249120 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.249186 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-swift-storage-0\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.249342 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jjs4\" (UniqueName: \"kubernetes.io/projected/40c77c2b-ab46-4a48-86e3-350908cc9c8e-kube-api-access-4jjs4\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.249419 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.249485 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-config\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.249648 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-svc\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: 
\"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.250401 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-swift-storage-0\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.250420 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-nb\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.250637 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-config\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.251180 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-svc\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.251185 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-sb\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.263002 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.265041 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.267888 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.286360 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.296925 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jjs4\" (UniqueName: \"kubernetes.io/projected/40c77c2b-ab46-4a48-86e3-350908cc9c8e-kube-api-access-4jjs4\") pod \"dnsmasq-dns-7c68459c4c-qqfdf\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.351447 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.351525 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.351553 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.351606 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-scripts\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.351685 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f95nh\" (UniqueName: \"kubernetes.io/projected/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-kube-api-access-f95nh\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.351773 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.351809 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-logs\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.448690 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.453474 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.453551 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-logs\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.453608 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.453658 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.453682 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.453732 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-scripts\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.453834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f95nh\" (UniqueName: \"kubernetes.io/projected/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-kube-api-access-f95nh\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.454309 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.454696 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-logs\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.458849 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " 
pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.459172 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.459515 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-scripts\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.459564 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.473119 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f95nh\" (UniqueName: \"kubernetes.io/projected/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-kube-api-access-f95nh\") pod \"cinder-api-0\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.590070 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.674664 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.704032 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ppx6b" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.704864 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ppx6b" event={"ID":"c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6","Type":"ContainerDied","Data":"a3f4b6a85420a8c604c8cc3ad76b468e7ef05f5b261219ae4dc4c0d771e469d4"} Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.704900 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f4b6a85420a8c604c8cc3ad76b468e7ef05f5b261219ae4dc4c0d771e469d4" Nov 28 17:23:00 crc kubenswrapper[5024]: I1128 17:23:00.727997 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.322781 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-qqfdf"] Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.366351 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qgvxf"] Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.373609 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.373680 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.389129 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-nb\") pod \"32ab0e88-ae1b-4f41-9301-d419935f30df\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.389205 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-sb\") pod \"32ab0e88-ae1b-4f41-9301-d419935f30df\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.389245 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-svc\") pod \"32ab0e88-ae1b-4f41-9301-d419935f30df\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.389295 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-config\") pod \"32ab0e88-ae1b-4f41-9301-d419935f30df\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.389389 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-swift-storage-0\") pod \"32ab0e88-ae1b-4f41-9301-d419935f30df\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.389528 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mtwc\" (UniqueName: \"kubernetes.io/projected/32ab0e88-ae1b-4f41-9301-d419935f30df-kube-api-access-8mtwc\") pod \"32ab0e88-ae1b-4f41-9301-d419935f30df\" (UID: \"32ab0e88-ae1b-4f41-9301-d419935f30df\") " Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.409883 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ab0e88-ae1b-4f41-9301-d419935f30df-kube-api-access-8mtwc" (OuterVolumeSpecName: "kube-api-access-8mtwc") pod "32ab0e88-ae1b-4f41-9301-d419935f30df" (UID: "32ab0e88-ae1b-4f41-9301-d419935f30df"). InnerVolumeSpecName "kube-api-access-8mtwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.425418 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qgvxf"] Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.493506 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-config\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.493571 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.493649 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jfms\" (UniqueName: \"kubernetes.io/projected/5449af6d-cc03-476d-b27c-b2932a79761b-kube-api-access-5jfms\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.493742 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.493844 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.493914 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.493986 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mtwc\" (UniqueName: \"kubernetes.io/projected/32ab0e88-ae1b-4f41-9301-d419935f30df-kube-api-access-8mtwc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.591968 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-config" (OuterVolumeSpecName: "config") pod "32ab0e88-ae1b-4f41-9301-d419935f30df" (UID: "32ab0e88-ae1b-4f41-9301-d419935f30df"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.601363 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.601452 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.601482 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-config\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.601508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.601592 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jfms\" (UniqueName: \"kubernetes.io/projected/5449af6d-cc03-476d-b27c-b2932a79761b-kube-api-access-5jfms\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.601666 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.601787 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.602977 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.603895 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.603993 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-config\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.604180 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.604448 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.618585 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "32ab0e88-ae1b-4f41-9301-d419935f30df" (UID: "32ab0e88-ae1b-4f41-9301-d419935f30df"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.626756 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jfms\" (UniqueName: \"kubernetes.io/projected/5449af6d-cc03-476d-b27c-b2932a79761b-kube-api-access-5jfms\") pod \"dnsmasq-dns-5c9776ccc5-qgvxf\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.643502 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32ab0e88-ae1b-4f41-9301-d419935f30df" (UID: "32ab0e88-ae1b-4f41-9301-d419935f30df"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.670753 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32ab0e88-ae1b-4f41-9301-d419935f30df" (UID: "32ab0e88-ae1b-4f41-9301-d419935f30df"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.712999 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.713510 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.713528 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.729600 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.731788 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32ab0e88-ae1b-4f41-9301-d419935f30df" (UID: "32ab0e88-ae1b-4f41-9301-d419935f30df"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.744437 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" event={"ID":"32ab0e88-ae1b-4f41-9301-d419935f30df","Type":"ContainerDied","Data":"9204937e03af9e8aba22aec0742518e68df4c758219fb993df88ecdebd32f4f8"} Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.744872 5024 scope.go:117] "RemoveContainer" containerID="8c23faba98b605c9abe4db3008c58d98113e72a7823fd2e41f37b6282b2f14c1" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.745270 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.816386 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ab0e88-ae1b-4f41-9301-d419935f30df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.953462 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"] Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.962325 5024 scope.go:117] "RemoveContainer" containerID="4d290a90292ff93a1583080f162ca4a0cc766bdace50735c3ffda47d59660a2c" Nov 28 17:23:01 crc kubenswrapper[5024]: I1128 17:23:01.972528 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bnvhr"] Nov 28 17:23:02 crc kubenswrapper[5024]: E1128 17:23:02.001520 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.063987 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-qqfdf"] Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.173195 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:02 crc kubenswrapper[5024]: E1128 17:23:02.174239 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerName="init" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.174291 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerName="init" Nov 28 17:23:02 crc kubenswrapper[5024]: E1128 17:23:02.174476 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerName="dnsmasq-dns" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.174494 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerName="dnsmasq-dns" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.174990 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerName="dnsmasq-dns" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.177095 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.180703 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mcqcv" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.181059 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.181321 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.185386 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.234250 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.234338 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzj2t\" (UniqueName: \"kubernetes.io/projected/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-kube-api-access-rzj2t\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.234389 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-logs\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.234453 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.234527 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.234570 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.234625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " 
pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.277908 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:23:02 crc kubenswrapper[5024]: W1128 17:23:02.282097 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6b35e94_ac6f_43de_8b71_9785ed09145f.slice/crio-f4553f2cdb8afcfcac0783ae01626e4feeba14fbcfc834b454e4225d049b95c2 WatchSource:0}: Error finding container f4553f2cdb8afcfcac0783ae01626e4feeba14fbcfc834b454e4225d049b95c2: Status 404 returned error can't find the container with id f4553f2cdb8afcfcac0783ae01626e4feeba14fbcfc834b454e4225d049b95c2 Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336379 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336453 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336512 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336655 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336719 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzj2t\" (UniqueName: \"kubernetes.io/projected/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-kube-api-access-rzj2t\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336784 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-logs\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336843 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.336921 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.337457 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.340797 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-logs\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.342797 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.352289 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.362118 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.392543 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzj2t\" (UniqueName: \"kubernetes.io/projected/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-kube-api-access-rzj2t\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.468332 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.562582 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.664796 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" path="/var/lib/kubelet/pods/32ab0e88-ae1b-4f41-9301-d419935f30df/volumes" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.690804 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.690844 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.694567 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.701138 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.712434 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.758768 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qgvxf"] Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.779523 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6b35e94-ac6f-43de-8b71-9785ed09145f","Type":"ContainerStarted","Data":"f4553f2cdb8afcfcac0783ae01626e4feeba14fbcfc834b454e4225d049b95c2"} Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.781013 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d8644854-9v4h9" event={"ID":"fe4994f3-49cb-4dda-957d-8deb244949e7","Type":"ContainerStarted","Data":"247afff91ae66b7e45c189705a9caea236c3d66ab042a572497038777681681e"} Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.782451 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.782519 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.786070 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerStarted","Data":"28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d"} Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.786234 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="ceilometer-notification-agent" containerID="cri-o://90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50" gracePeriod=30 Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.786302 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.786233 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="proxy-httpd" containerID="cri-o://28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d" gracePeriod=30 Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.786244 5024 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="sg-core" containerID="cri-o://37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391" gracePeriod=30 Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.792347 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d40d007c-1b46-49b2-b8ef-5c5332ba74b7","Type":"ContainerStarted","Data":"9db52a7a5bcd07b5a58b6b48be61ec7f0dbd23760c1d92572a1e630d460d86e0"} Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.795702 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" event={"ID":"40c77c2b-ab46-4a48-86e3-350908cc9c8e","Type":"ContainerStarted","Data":"7dc82a2f14e71e3e1bd62d7f51d6c86729815edb8d9c4b08d11b29df2fdca20f"} Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.801259 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" event={"ID":"5449af6d-cc03-476d-b27c-b2932a79761b","Type":"ContainerStarted","Data":"0731a69ef0828fca883faa5ed55549c5b9d7ec8ac701ddcabfe8ff2ab53ca4fc"} Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.803373 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.803436 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.803617 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.803789 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.803938 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.804151 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.804283 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjfds\" (UniqueName: \"kubernetes.io/projected/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-kube-api-access-wjfds\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.832036 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56d8644854-9v4h9" podStartSLOduration=13.831969076 podStartE2EDuration="13.831969076s" podCreationTimestamp="2025-11-28 17:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:02.817114294 +0000 UTC m=+1484.866035199" watchObservedRunningTime="2025-11-28 17:23:02.831969076 +0000 UTC m=+1484.880889981" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.906661 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.906737 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.906767 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjfds\" (UniqueName: \"kubernetes.io/projected/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-kube-api-access-wjfds\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.906923 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.906948 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.906972 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.907034 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.907247 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.908664 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.908961 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-logs\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.913436 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.929375 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.929440 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.939178 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjfds\" (UniqueName: \"kubernetes.io/projected/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-kube-api-access-wjfds\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:02 crc kubenswrapper[5024]: I1128 17:23:02.940254 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.077982 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.635814 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:03 crc kubenswrapper[5024]: W1128 17:23:03.643466 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c51cff9_9d4a_4516_bc33_9fbf7e52783a.slice/crio-d1fff738d625764a346834b2db7f7f83123d1b864b8cfee6acb15d7ab1eda1ed WatchSource:0}: Error finding container d1fff738d625764a346834b2db7f7f83123d1b864b8cfee6acb15d7ab1eda1ed: Status 404 returned error can't find the container with id d1fff738d625764a346834b2db7f7f83123d1b864b8cfee6acb15d7ab1eda1ed Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.827549 5024 generic.go:334] "Generic (PLEG): container finished" podID="40c77c2b-ab46-4a48-86e3-350908cc9c8e" containerID="bfd5e7d96a0aad8b6da29d2b8cdbf6162d79be34f93bdc8037842d31c3b2e8e3" exitCode=0 Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.827610 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" event={"ID":"40c77c2b-ab46-4a48-86e3-350908cc9c8e","Type":"ContainerDied","Data":"bfd5e7d96a0aad8b6da29d2b8cdbf6162d79be34f93bdc8037842d31c3b2e8e3"} Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.833528 5024 generic.go:334] "Generic (PLEG): container finished" podID="5449af6d-cc03-476d-b27c-b2932a79761b" containerID="e5968e78e28594d50ec8ba60a7d8c481840b7f21e8f3044de5fb2d53275b3e30" exitCode=0 Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.833606 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" event={"ID":"5449af6d-cc03-476d-b27c-b2932a79761b","Type":"ContainerDied","Data":"e5968e78e28594d50ec8ba60a7d8c481840b7f21e8f3044de5fb2d53275b3e30"} Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.838846 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c51cff9-9d4a-4516-bc33-9fbf7e52783a","Type":"ContainerStarted","Data":"d1fff738d625764a346834b2db7f7f83123d1b864b8cfee6acb15d7ab1eda1ed"} Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.851908 5024 generic.go:334] "Generic (PLEG): container finished" podID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerID="28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d" exitCode=0 Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.851937 5024 generic.go:334] "Generic (PLEG): container finished" podID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerID="37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391" exitCode=2 Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.858755 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerDied","Data":"28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d"} Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.858827 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerDied","Data":"37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391"} Nov 28 17:23:03 crc kubenswrapper[5024]: I1128 17:23:03.918162 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7978574989-5r9v4" Nov 28 17:23:04 crc kubenswrapper[5024]: 
I1128 17:23:04.019287 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.042939 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58c9d5dbb8-n2r5j"] Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.043211 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58c9d5dbb8-n2r5j" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-api" containerID="cri-o://dcbfbe7a9970714e3d892d8691856819bd96023cf5f79397311ebc29b3997dfe" gracePeriod=30 Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.043301 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58c9d5dbb8-n2r5j" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-httpd" containerID="cri-o://1f59b3a535dd27a947e5f56189231f63b538e89df8e7fc281f3c96a88fbab74c" gracePeriod=30 Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.296440 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.398479 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-config\") pod \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.398526 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jjs4\" (UniqueName: \"kubernetes.io/projected/40c77c2b-ab46-4a48-86e3-350908cc9c8e-kube-api-access-4jjs4\") pod \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.398596 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-svc\") pod \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.398662 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-nb\") pod \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.398710 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-sb\") pod \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.398761 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-swift-storage-0\") pod \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\" (UID: \"40c77c2b-ab46-4a48-86e3-350908cc9c8e\") " Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.434408 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c77c2b-ab46-4a48-86e3-350908cc9c8e-kube-api-access-4jjs4" (OuterVolumeSpecName: 
"kube-api-access-4jjs4") pod "40c77c2b-ab46-4a48-86e3-350908cc9c8e" (UID: "40c77c2b-ab46-4a48-86e3-350908cc9c8e"). InnerVolumeSpecName "kube-api-access-4jjs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.462847 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40c77c2b-ab46-4a48-86e3-350908cc9c8e" (UID: "40c77c2b-ab46-4a48-86e3-350908cc9c8e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.489203 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-bnvhr" podUID="32ab0e88-ae1b-4f41-9301-d419935f30df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.181:5353: i/o timeout" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.511979 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jjs4\" (UniqueName: \"kubernetes.io/projected/40c77c2b-ab46-4a48-86e3-350908cc9c8e-kube-api-access-4jjs4\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.520004 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.566313 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40c77c2b-ab46-4a48-86e3-350908cc9c8e" (UID: "40c77c2b-ab46-4a48-86e3-350908cc9c8e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.580048 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-config" (OuterVolumeSpecName: "config") pod "40c77c2b-ab46-4a48-86e3-350908cc9c8e" (UID: "40c77c2b-ab46-4a48-86e3-350908cc9c8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.629115 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.629525 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.645540 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "40c77c2b-ab46-4a48-86e3-350908cc9c8e" (UID: "40c77c2b-ab46-4a48-86e3-350908cc9c8e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.648098 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40c77c2b-ab46-4a48-86e3-350908cc9c8e" (UID: "40c77c2b-ab46-4a48-86e3-350908cc9c8e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.731119 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.731146 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40c77c2b-ab46-4a48-86e3-350908cc9c8e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.855301 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.922058 5024 generic.go:334] "Generic (PLEG): container finished" podID="46253c13-9836-4929-8fdd-a2ce0060f149" containerID="1f59b3a535dd27a947e5f56189231f63b538e89df8e7fc281f3c96a88fbab74c" exitCode=0 Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.922258 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c9d5dbb8-n2r5j" event={"ID":"46253c13-9836-4929-8fdd-a2ce0060f149","Type":"ContainerDied","Data":"1f59b3a535dd27a947e5f56189231f63b538e89df8e7fc281f3c96a88fbab74c"} Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.925752 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda","Type":"ContainerStarted","Data":"5738c968897557762b0d8220442d7a85a8a8a918264f538a0063e810bd4dcb97"} Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.940810 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d40d007c-1b46-49b2-b8ef-5c5332ba74b7","Type":"ContainerStarted","Data":"8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a"} Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.953807 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" event={"ID":"5449af6d-cc03-476d-b27c-b2932a79761b","Type":"ContainerStarted","Data":"c7106fb736d08ea618744a44ef20674750bafed77585fddc9b4ea6b179f532b3"} Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.956325 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.966115 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c51cff9-9d4a-4516-bc33-9fbf7e52783a","Type":"ContainerStarted","Data":"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd"} Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.974446 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" event={"ID":"40c77c2b-ab46-4a48-86e3-350908cc9c8e","Type":"ContainerDied","Data":"7dc82a2f14e71e3e1bd62d7f51d6c86729815edb8d9c4b08d11b29df2fdca20f"} Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.974523 
5024 scope.go:117] "RemoveContainer" containerID="bfd5e7d96a0aad8b6da29d2b8cdbf6162d79be34f93bdc8037842d31c3b2e8e3" Nov 28 17:23:04 crc kubenswrapper[5024]: I1128 17:23:04.974705 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c68459c4c-qqfdf" Nov 28 17:23:05 crc kubenswrapper[5024]: I1128 17:23:05.011279 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" podStartSLOduration=4.011252833 podStartE2EDuration="4.011252833s" podCreationTimestamp="2025-11-28 17:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:05.00055766 +0000 UTC m=+1487.049478575" watchObservedRunningTime="2025-11-28 17:23:05.011252833 +0000 UTC m=+1487.060173738" Nov 28 17:23:05 crc kubenswrapper[5024]: I1128 17:23:05.144379 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-qqfdf"] Nov 28 17:23:05 crc kubenswrapper[5024]: I1128 17:23:05.161880 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c68459c4c-qqfdf"] Nov 28 17:23:05 crc kubenswrapper[5024]: I1128 17:23:05.315273 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56d8644854-9v4h9" Nov 28 17:23:05 crc kubenswrapper[5024]: I1128 17:23:05.865007 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:05 crc kubenswrapper[5024]: I1128 17:23:05.951934 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.013327 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda","Type":"ContainerStarted","Data":"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991"} Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.030265 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d40d007c-1b46-49b2-b8ef-5c5332ba74b7","Type":"ContainerStarted","Data":"3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941"} Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.030803 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api-log" containerID="cri-o://8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a" gracePeriod=30 Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.031090 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.031410 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api" containerID="cri-o://3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941" gracePeriod=30 Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.042841 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c51cff9-9d4a-4516-bc33-9fbf7e52783a","Type":"ContainerStarted","Data":"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba"} Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 
17:23:06.042962 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-log" containerID="cri-o://ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd" gracePeriod=30 Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.043181 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-httpd" containerID="cri-o://4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba" gracePeriod=30 Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.078285 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6b35e94-ac6f-43de-8b71-9785ed09145f","Type":"ContainerStarted","Data":"cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333"} Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.081581 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.081085226 podStartE2EDuration="6.081085226s" podCreationTimestamp="2025-11-28 17:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:06.047206145 +0000 UTC m=+1488.096127050" watchObservedRunningTime="2025-11-28 17:23:06.081085226 +0000 UTC m=+1488.130006141" Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.117581 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.11755865 podStartE2EDuration="5.11755865s" podCreationTimestamp="2025-11-28 17:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:06.071607317 +0000 UTC m=+1488.120528222" watchObservedRunningTime="2025-11-28 17:23:06.11755865 +0000 UTC m=+1488.166479555" Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.517777 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c77c2b-ab46-4a48-86e3-350908cc9c8e" path="/var/lib/kubelet/pods/40c77c2b-ab46-4a48-86e3-350908cc9c8e/volumes" Nov 28 17:23:06 crc kubenswrapper[5024]: I1128 17:23:06.962709 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.104736 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.104627 5024 generic.go:334] "Generic (PLEG): container finished" podID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerID="4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba" exitCode=143 Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.104812 5024 generic.go:334] "Generic (PLEG): container finished" podID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerID="ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd" exitCode=143 Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.104806 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c51cff9-9d4a-4516-bc33-9fbf7e52783a","Type":"ContainerDied","Data":"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba"} Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.104906 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c51cff9-9d4a-4516-bc33-9fbf7e52783a","Type":"ContainerDied","Data":"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd"} Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.104921 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c51cff9-9d4a-4516-bc33-9fbf7e52783a","Type":"ContainerDied","Data":"d1fff738d625764a346834b2db7f7f83123d1b864b8cfee6acb15d7ab1eda1ed"} Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.104971 5024 scope.go:117] "RemoveContainer" containerID="4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.106690 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6b35e94-ac6f-43de-8b71-9785ed09145f","Type":"ContainerStarted","Data":"2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673"} Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.108814 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda","Type":"ContainerStarted","Data":"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428"} Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.108991 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerName="glance-log" containerID="cri-o://23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991" gracePeriod=30 Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.109279 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerName="glance-httpd" containerID="cri-o://e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428" gracePeriod=30 Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.111421 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzj2t\" (UniqueName: \"kubernetes.io/projected/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-kube-api-access-rzj2t\") pod \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.111554 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-logs\") pod \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.111747 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-config-data\") pod \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.111780 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.111850 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-httpd-run\") pod \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.111934 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-combined-ca-bundle\") pod \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.112034 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-scripts\") pod \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\" (UID: \"1c51cff9-9d4a-4516-bc33-9fbf7e52783a\") " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.112531 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-logs" (OuterVolumeSpecName: "logs") pod "1c51cff9-9d4a-4516-bc33-9fbf7e52783a" (UID: "1c51cff9-9d4a-4516-bc33-9fbf7e52783a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.112654 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c51cff9-9d4a-4516-bc33-9fbf7e52783a" (UID: "1c51cff9-9d4a-4516-bc33-9fbf7e52783a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.119140 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "1c51cff9-9d4a-4516-bc33-9fbf7e52783a" (UID: "1c51cff9-9d4a-4516-bc33-9fbf7e52783a"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.121747 5024 generic.go:334] "Generic (PLEG): container finished" podID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerID="8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a" exitCode=143 Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.121810 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d40d007c-1b46-49b2-b8ef-5c5332ba74b7","Type":"ContainerDied","Data":"8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a"} Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.125210 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-scripts" (OuterVolumeSpecName: "scripts") pod "1c51cff9-9d4a-4516-bc33-9fbf7e52783a" (UID: "1c51cff9-9d4a-4516-bc33-9fbf7e52783a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.135327 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-kube-api-access-rzj2t" (OuterVolumeSpecName: "kube-api-access-rzj2t") pod "1c51cff9-9d4a-4516-bc33-9fbf7e52783a" (UID: "1c51cff9-9d4a-4516-bc33-9fbf7e52783a"). InnerVolumeSpecName "kube-api-access-rzj2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.152644 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.161887656 podStartE2EDuration="8.152623477s" podCreationTimestamp="2025-11-28 17:22:59 +0000 UTC" firstStartedPulling="2025-11-28 17:23:02.285357283 +0000 UTC m=+1484.334278188" lastFinishedPulling="2025-11-28 17:23:04.276093104 +0000 UTC m=+1486.325014009" observedRunningTime="2025-11-28 17:23:07.147421939 +0000 UTC m=+1489.196342844" watchObservedRunningTime="2025-11-28 17:23:07.152623477 +0000 UTC m=+1489.201544392" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.175595 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c51cff9-9d4a-4516-bc33-9fbf7e52783a" (UID: "1c51cff9-9d4a-4516-bc33-9fbf7e52783a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.179487 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.179472298 podStartE2EDuration="6.179472298s" podCreationTimestamp="2025-11-28 17:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:07.177149152 +0000 UTC m=+1489.226070067" watchObservedRunningTime="2025-11-28 17:23:07.179472298 +0000 UTC m=+1489.228393203" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.234598 5024 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.234658 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.234669 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.234678 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzj2t\" (UniqueName: \"kubernetes.io/projected/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-kube-api-access-rzj2t\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.234687 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.234705 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.248429 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-config-data" (OuterVolumeSpecName: "config-data") pod "1c51cff9-9d4a-4516-bc33-9fbf7e52783a" (UID: "1c51cff9-9d4a-4516-bc33-9fbf7e52783a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.281060 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.348650 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c51cff9-9d4a-4516-bc33-9fbf7e52783a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.348704 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.393999 5024 scope.go:117] "RemoveContainer" containerID="ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.434509 5024 scope.go:117] "RemoveContainer" containerID="4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba" Nov 28 17:23:07 crc kubenswrapper[5024]: E1128 17:23:07.445212 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba\": container with ID starting with 4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba not found: ID does not exist" containerID="4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.445263 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba"} err="failed to get container status \"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba\": rpc error: code = NotFound desc = could not find container \"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba\": container with ID starting with 4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba not found: ID does not exist" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.445300 5024 scope.go:117] "RemoveContainer" containerID="ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd" Nov 28 17:23:07 crc kubenswrapper[5024]: E1128 17:23:07.446588 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd\": container with ID starting with ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd not found: ID does not exist" containerID="ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.446636 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd"} err="failed to get container status \"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd\": rpc error: code = NotFound desc = could not find container \"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd\": container with ID starting with ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd not found: ID does not exist" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.446670 5024 scope.go:117] "RemoveContainer" 
containerID="4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.447626 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba"} err="failed to get container status \"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba\": rpc error: code = NotFound desc = could not find container \"4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba\": container with ID starting with 4d4373abd59ea8e92d23a51d97c88a3f9c21b068c4f54b21d149ca48ce4beeba not found: ID does not exist" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.447655 5024 scope.go:117] "RemoveContainer" containerID="ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.447898 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd"} err="failed to get container status \"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd\": rpc error: code = NotFound desc = could not find container \"ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd\": container with ID starting with ccb7540dd27b2de579ff79bda828b80f2e35959de6039c927481fe46085b43fd not found: ID does not exist" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.526415 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.565120 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.581968 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:07 crc kubenswrapper[5024]: E1128 17:23:07.582774 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c77c2b-ab46-4a48-86e3-350908cc9c8e" containerName="init" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.595562 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c77c2b-ab46-4a48-86e3-350908cc9c8e" containerName="init" Nov 28 17:23:07 crc kubenswrapper[5024]: E1128 17:23:07.595649 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-log" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.595657 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-log" Nov 28 17:23:07 crc kubenswrapper[5024]: E1128 17:23:07.595671 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-httpd" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.595678 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-httpd" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.596091 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-httpd" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.596116 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c77c2b-ab46-4a48-86e3-350908cc9c8e" containerName="init" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.596131 5024 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" containerName="glance-log" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.597938 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.601941 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.617482 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.617724 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.759625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.759688 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.759725 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.759787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xnn\" (UniqueName: \"kubernetes.io/projected/e1435cd0-7a59-45be-9658-d875edd55a7f-kube-api-access-44xnn\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.759895 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.760566 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.760673 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.760771 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-logs\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.863749 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.863878 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.864409 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.865669 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-logs\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.865827 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.865861 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.865903 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.865975 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xnn\" (UniqueName: \"kubernetes.io/projected/e1435cd0-7a59-45be-9658-d875edd55a7f-kube-api-access-44xnn\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " 
pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.866171 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.866848 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.867436 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-logs\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.875435 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.875913 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.878658 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.880019 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.893838 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xnn\" (UniqueName: \"kubernetes.io/projected/e1435cd0-7a59-45be-9658-d875edd55a7f-kube-api-access-44xnn\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.943857 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.964165 5024 util.go:30] "No sandbox for pod can be 
Nov 28 17:23:07 crc kubenswrapper[5024]: I1128 17:23:07.964165 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.165479 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.173744 5024 generic.go:334] "Generic (PLEG): container finished" podID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerID="90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50" exitCode=0 Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.173801 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerDied","Data":"90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50"} Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.173829 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a","Type":"ContainerDied","Data":"8de2cac2887df93774ff3cdf0b2a521989e7dd9f2a06777772d5980037a00a12"} Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.173845 5024 scope.go:117] "RemoveContainer" containerID="28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.198418 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.215562 5024 generic.go:334] "Generic (PLEG): container finished" podID="46253c13-9836-4929-8fdd-a2ce0060f149" containerID="dcbfbe7a9970714e3d892d8691856819bd96023cf5f79397311ebc29b3997dfe" exitCode=0 Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.215647 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c9d5dbb8-n2r5j" event={"ID":"46253c13-9836-4929-8fdd-a2ce0060f149","Type":"ContainerDied","Data":"dcbfbe7a9970714e3d892d8691856819bd96023cf5f79397311ebc29b3997dfe"} Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.231402 5024 generic.go:334] "Generic (PLEG): container finished" podID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerID="e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428" exitCode=0 Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.231442 5024 generic.go:334] "Generic (PLEG): container finished" podID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerID="23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991" exitCode=143 Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.231522 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda","Type":"ContainerDied","Data":"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428"} Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.231555 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda","Type":"ContainerDied","Data":"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991"} Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.231568 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda","Type":"ContainerDied","Data":"5738c968897557762b0d8220442d7a85a8a8a918264f538a0063e810bd4dcb97"}
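These "Generic (PLEG): container finished" lines come from the pod lifecycle event generator, which relists containers and turns runtime state changes into SyncLoop events. The exit codes follow the usual POSIX convention: 0 is a clean exit, while 143 is 128 + 15, a process that terminated on SIGTERM during the shutdown of glance-default-internal-api-0. A small sketch of that decoding (the helper name is invented for illustration):

```go
// Decode container exit codes the way these PLEG lines report them.
package main

import "fmt"

// describeExit is a hypothetical helper: codes above 128 conventionally
// mean "killed by signal (code - 128)", so 143 = 128 + SIGTERM(15).
func describeExit(code int) string {
	switch {
	case code == 0:
		return "exited cleanly"
	case code > 128:
		return fmt.Sprintf("terminated by signal %d", code-128)
	default:
		return fmt.Sprintf("failed with exit code %d", code)
	}
}

func main() {
	for _, code := range []int{0, 143} { // the two codes seen in the log
		fmt.Printf("exitCode=%d: %s\n", code, describeExit(code))
	}
}
```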
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.284209 5024 scope.go:117] "RemoveContainer" containerID="37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.289875 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-config-data\") pod \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290069 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-combined-ca-bundle\") pod \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290158 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-httpd-run\") pod \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290207 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjfds\" (UniqueName: \"kubernetes.io/projected/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-kube-api-access-wjfds\") pod \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290242 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290313 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-scripts\") pod \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290373 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-config-data\") pod \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290407 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-scripts\") pod \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290437 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-log-httpd\") pod \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290465 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jx4mx\" (UniqueName: \"kubernetes.io/projected/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-kube-api-access-jx4mx\") pod \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290527 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-sg-core-conf-yaml\") pod \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290574 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-logs\") pod \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\" (UID: \"f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290599 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-run-httpd\") pod \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290646 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-combined-ca-bundle\") pod \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\" (UID: \"01acb9ec-ac92-403c-a3fc-fcbf0e3b800a\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.290904 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" (UID: "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.291277 5024 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.291372 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" (UID: "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.301194 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-kube-api-access-wjfds" (OuterVolumeSpecName: "kube-api-access-wjfds") pod "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" (UID: "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda"). InnerVolumeSpecName "kube-api-access-wjfds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.301254 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" (UID: "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.301474 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-logs" (OuterVolumeSpecName: "logs") pod "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" (UID: "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.301485 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" (UID: "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.303303 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-kube-api-access-jx4mx" (OuterVolumeSpecName: "kube-api-access-jx4mx") pod "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" (UID: "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a"). InnerVolumeSpecName "kube-api-access-jx4mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.319256 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-scripts" (OuterVolumeSpecName: "scripts") pod "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" (UID: "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.320277 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-scripts" (OuterVolumeSpecName: "scripts") pod "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" (UID: "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.330264 5024 scope.go:117] "RemoveContainer" containerID="90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.349094 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.356911 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" (UID: "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.359179 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" (UID: "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.371631 5024 scope.go:117] "RemoveContainer" containerID="28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.389054 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d\": container with ID starting with 28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d not found: ID does not exist" containerID="28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.389121 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d"} err="failed to get container status \"28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d\": rpc error: code = NotFound desc = could not find container \"28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d\": container with ID starting with 28a5a5ba69ef115490c934a4229963e6663e2ff4aff6813acc625bc5db19de9d not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.389151 5024 scope.go:117] "RemoveContainer" containerID="37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393737 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393780 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjfds\" (UniqueName: \"kubernetes.io/projected/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-kube-api-access-wjfds\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393805 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393815 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393824 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393832 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393841 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx4mx\" (UniqueName: \"kubernetes.io/projected/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-kube-api-access-jx4mx\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393851 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393858 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.393865 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.396605 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391\": container with ID starting with 37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391 not found: ID does not exist" containerID="37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.396761 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391"} err="failed to get container status \"37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391\": rpc error: code = NotFound desc = could not find container \"37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391\": container with ID starting with 37b1eac94f87cd8b74ea3388c99c54b884a808bc0fbfc8bcea31519e88d93391 not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.396794 5024 scope.go:117] "RemoveContainer" containerID="90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.397340 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50\": container with ID starting with 90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50 not found: ID does not exist" containerID="90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.397361 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50"} err="failed to get container status \"90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50\": rpc error: code = NotFound desc = could not find container \"90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50\": container with ID starting with 90a985202a1ef81023bc9287d63a905b7aa57476be0f6d055af451275a6f8b50 not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.397375 5024 scope.go:117] "RemoveContainer" containerID="e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.399275 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" (UID: "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.437180 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-config-data" (OuterVolumeSpecName: "config-data") pod "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" (UID: "01acb9ec-ac92-403c-a3fc-fcbf0e3b800a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.444502 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-config-data" (OuterVolumeSpecName: "config-data") pod "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" (UID: "f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.453477 5024 scope.go:117] "RemoveContainer" containerID="23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.472828 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.496792 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-config\") pod \"46253c13-9836-4929-8fdd-a2ce0060f149\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.496970 5024 scope.go:117] "RemoveContainer" containerID="e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.497037 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-httpd-config\") pod \"46253c13-9836-4929-8fdd-a2ce0060f149\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.497090 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-ovndb-tls-certs\") pod \"46253c13-9836-4929-8fdd-a2ce0060f149\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.497186 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-combined-ca-bundle\") pod \"46253c13-9836-4929-8fdd-a2ce0060f149\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.497246 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljgsq\" (UniqueName: \"kubernetes.io/projected/46253c13-9836-4929-8fdd-a2ce0060f149-kube-api-access-ljgsq\") pod \"46253c13-9836-4929-8fdd-a2ce0060f149\" (UID: \"46253c13-9836-4929-8fdd-a2ce0060f149\") " Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.497670 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc 
kubenswrapper[5024]: I1128 17:23:08.497682 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.497691 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.497699 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.503639 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428\": container with ID starting with e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428 not found: ID does not exist" containerID="e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.503694 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428"} err="failed to get container status \"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428\": rpc error: code = NotFound desc = could not find container \"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428\": container with ID starting with e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428 not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.503726 5024 scope.go:117] "RemoveContainer" containerID="23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.504073 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991\": container with ID starting with 23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991 not found: ID does not exist" containerID="23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.504105 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991"} err="failed to get container status \"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991\": rpc error: code = NotFound desc = could not find container \"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991\": container with ID starting with 23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991 not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.504123 5024 scope.go:117] "RemoveContainer" containerID="e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.504659 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428"} err="failed to get container status \"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428\": 
rpc error: code = NotFound desc = could not find container \"e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428\": container with ID starting with e1e9f32f21bf1a6e8c89a610e3b04cf41c6141cda30b9aa53066176134675428 not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.504844 5024 scope.go:117] "RemoveContainer" containerID="23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.505078 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46253c13-9836-4929-8fdd-a2ce0060f149-kube-api-access-ljgsq" (OuterVolumeSpecName: "kube-api-access-ljgsq") pod "46253c13-9836-4929-8fdd-a2ce0060f149" (UID: "46253c13-9836-4929-8fdd-a2ce0060f149"). InnerVolumeSpecName "kube-api-access-ljgsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.505304 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991"} err="failed to get container status \"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991\": rpc error: code = NotFound desc = could not find container \"23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991\": container with ID starting with 23487958b0eca85726ec59346cc61a406f959f76f7a2f2c7089bbb8e40aac991 not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.508883 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "46253c13-9836-4929-8fdd-a2ce0060f149" (UID: "46253c13-9836-4929-8fdd-a2ce0060f149"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.528339 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c51cff9-9d4a-4516-bc33-9fbf7e52783a" path="/var/lib/kubelet/pods/1c51cff9-9d4a-4516-bc33-9fbf7e52783a/volumes" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.599122 5024 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.599148 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljgsq\" (UniqueName: \"kubernetes.io/projected/46253c13-9836-4929-8fdd-a2ce0060f149-kube-api-access-ljgsq\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.604175 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-config" (OuterVolumeSpecName: "config") pod "46253c13-9836-4929-8fdd-a2ce0060f149" (UID: "46253c13-9836-4929-8fdd-a2ce0060f149"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.628854 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.651316 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46253c13-9836-4929-8fdd-a2ce0060f149" (UID: "46253c13-9836-4929-8fdd-a2ce0060f149"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.670549 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690174 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.690729 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="ceilometer-notification-agent" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690753 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="ceilometer-notification-agent" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.690766 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="sg-core" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690772 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="sg-core" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.690795 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="proxy-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690801 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="proxy-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.690827 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-api" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690833 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-api" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.690840 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690846 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.690860 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerName="glance-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690873 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerName="glance-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: E1128 17:23:08.690895 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" 
containerName="glance-log" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.690901 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerName="glance-log" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.691137 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="proxy-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.691151 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="ceilometer-notification-agent" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.691164 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" containerName="sg-core" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.691176 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerName="glance-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.691184 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-httpd" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.691193 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" containerName="neutron-api" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.691222 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" containerName="glance-log" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.692476 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.695319 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.696473 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.700890 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.700921 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.722305 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.753432 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "46253c13-9836-4929-8fdd-a2ce0060f149" (UID: "46253c13-9836-4929-8fdd-a2ce0060f149"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.802563 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.802635 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.802671 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.802892 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.802968 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.803055 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-logs\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.803300 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htxm2\" (UniqueName: \"kubernetes.io/projected/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-kube-api-access-htxm2\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.803347 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.803488 5024 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46253c13-9836-4929-8fdd-a2ce0060f149-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 
17:23:08 crc kubenswrapper[5024]: W1128 17:23:08.820895 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1435cd0_7a59_45be_9658_d875edd55a7f.slice/crio-46d781dc7dc48f0ca4945ddafd099ae895d5dca43c32663f0f7f7e76aea9a190 WatchSource:0}: Error finding container 46d781dc7dc48f0ca4945ddafd099ae895d5dca43c32663f0f7f7e76aea9a190: Status 404 returned error can't find the container with id 46d781dc7dc48f0ca4945ddafd099ae895d5dca43c32663f0f7f7e76aea9a190 Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.830634 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.905268 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.905801 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.907354 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.907523 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.907679 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-logs\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.907832 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.908068 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-logs\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.908376 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htxm2\" (UniqueName: 
\"kubernetes.io/projected/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-kube-api-access-htxm2\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.908445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.908539 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.908633 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.915431 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.927001 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.927744 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.928169 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:08 crc kubenswrapper[5024]: I1128 17:23:08.956616 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htxm2\" (UniqueName: \"kubernetes.io/projected/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-kube-api-access-htxm2\") pod \"glance-default-internal-api-0\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.037274 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" 
(UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.251093 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.264781 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c9d5dbb8-n2r5j" event={"ID":"46253c13-9836-4929-8fdd-a2ce0060f149","Type":"ContainerDied","Data":"0424b20b779b999ed089cc6bacb152c510cbab384ec911a34e34326dc5bcd059"} Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.264836 5024 scope.go:117] "RemoveContainer" containerID="1f59b3a535dd27a947e5f56189231f63b538e89df8e7fc281f3c96a88fbab74c" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.265047 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c9d5dbb8-n2r5j" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.292725 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1435cd0-7a59-45be-9658-d875edd55a7f","Type":"ContainerStarted","Data":"46d781dc7dc48f0ca4945ddafd099ae895d5dca43c32663f0f7f7e76aea9a190"} Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.325735 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.343050 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.360148 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.375565 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.387536 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.393623 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58c9d5dbb8-n2r5j"] Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.394388 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.398770 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.415946 5024 scope.go:117] "RemoveContainer" containerID="dcbfbe7a9970714e3d892d8691856819bd96023cf5f79397311ebc29b3997dfe" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.419160 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58c9d5dbb8-n2r5j"] Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.441997 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.444109 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-run-httpd\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.444312 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-log-httpd\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.444382 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-config-data\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.444482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.444516 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.444579 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-scripts\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.444620 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8x2\" (UniqueName: \"kubernetes.io/projected/826b547a-5534-4c11-83b5-f09a5d93e6c0-kube-api-access-7n8x2\") pod 
\"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.548404 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-log-httpd\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.548498 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-config-data\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.548548 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.548567 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.548591 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-scripts\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.548609 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8x2\" (UniqueName: \"kubernetes.io/projected/826b547a-5534-4c11-83b5-f09a5d93e6c0-kube-api-access-7n8x2\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.548793 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-run-httpd\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.550585 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-run-httpd\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.550842 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-log-httpd\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.566972 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.568384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.569595 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8x2\" (UniqueName: \"kubernetes.io/projected/826b547a-5534-4c11-83b5-f09a5d93e6c0-kube-api-access-7n8x2\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.570153 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-config-data\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.570577 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-scripts\") pod \"ceilometer-0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " pod="openstack/ceilometer-0" Nov 28 17:23:09 crc kubenswrapper[5024]: I1128 17:23:09.736449 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.026814 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:10 crc kubenswrapper[5024]: W1128 17:23:10.042675 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848a2d6b_0e0f_4c1c_bed2_9f9f06d7beb9.slice/crio-0845d3b9d9185ed5360d506e809b0f88dfaeafd0852347e01e4c612fa327de22 WatchSource:0}: Error finding container 0845d3b9d9185ed5360d506e809b0f88dfaeafd0852347e01e4c612fa327de22: Status 404 returned error can't find the container with id 0845d3b9d9185ed5360d506e809b0f88dfaeafd0852347e01e4c612fa327de22 Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.238146 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.321102 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.327987 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9","Type":"ContainerStarted","Data":"0845d3b9d9185ed5360d506e809b0f88dfaeafd0852347e01e4c612fa327de22"} Nov 28 17:23:10 crc kubenswrapper[5024]: W1128 17:23:10.331981 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod826b547a_5534_4c11_83b5_f09a5d93e6c0.slice/crio-db4483a4d8363ecab0178329624848cb456ba203790f34d300906e26dab944e3 WatchSource:0}: Error finding container db4483a4d8363ecab0178329624848cb456ba203790f34d300906e26dab944e3: Status 404 returned error can't find the container with id db4483a4d8363ecab0178329624848cb456ba203790f34d300906e26dab944e3 Nov 28 17:23:10 crc 
Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.332387 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1435cd0-7a59-45be-9658-d875edd55a7f","Type":"ContainerStarted","Data":"cfd1fe099070594ae6d1d27eac655e81370e8102945e843cf4da33af61162ce0"}
Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.514331 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01acb9ec-ac92-403c-a3fc-fcbf0e3b800a" path="/var/lib/kubelet/pods/01acb9ec-ac92-403c-a3fc-fcbf0e3b800a/volumes"
Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.515522 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46253c13-9836-4929-8fdd-a2ce0060f149" path="/var/lib/kubelet/pods/46253c13-9836-4929-8fdd-a2ce0060f149/volumes"
Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.516698 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda" path="/var/lib/kubelet/pods/f10f2edb-7e7a-4b8f-8eb3-ebc2cb820eda/volumes"
Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.527637 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 28 17:23:10 crc kubenswrapper[5024]: I1128 17:23:10.573801 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.347723 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9","Type":"ContainerStarted","Data":"cbe91c1748938d607ee44052b70c7a93c5bb58ef06660b54026ef1ad41c2a9a1"}
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.349359 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerStarted","Data":"db4483a4d8363ecab0178329624848cb456ba203790f34d300906e26dab944e3"}
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.352422 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1435cd0-7a59-45be-9658-d875edd55a7f","Type":"ContainerStarted","Data":"9b0e6c32635601ba4854b72f8b26a5a80b5fff1d851b0f3e383fdee73f69a080"}
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.352495 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="cinder-scheduler" containerID="cri-o://cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333" gracePeriod=30
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.352634 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="probe" containerID="cri-o://2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673" gracePeriod=30
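
The gracePeriod=30 kills above follow the usual two-phase stop: ask the container to exit, wait up to the grace period, then force-kill. The exitCode=143 seen a little later in this log is 128+SIGTERM, i.e. a clean response to the polite signal. A rough sketch of that pattern against a plain OS process (illustrative only, not the kubelet's CRI call path):

```go
// Sketch: stop-with-grace-period against an ordinary process.
// The kubelet does the equivalent through the runtime's StopContainer.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopProcess(cmd *exec.Cmd, grace time.Duration) error {
	// Ask politely first.
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period (143 = 128+SIGTERM)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: escalate to SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopProcess(cmd, 2*time.Second))
}
```
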
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.394454 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.394430762 podStartE2EDuration="4.394430762s" podCreationTimestamp="2025-11-28 17:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:11.387892106 +0000 UTC m=+1493.436813011" watchObservedRunningTime="2025-11-28 17:23:11.394430762 +0000 UTC m=+1493.443351667"
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.732184 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf"
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.863361 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-r68zx"]
Nov 28 17:23:11 crc kubenswrapper[5024]: I1128 17:23:11.863598 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" podUID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerName="dnsmasq-dns" containerID="cri-o://ad22e31f9cb9931334e950172830f3c2556e9a12c87cd4c930e45d6ff3f40f57" gracePeriod=10
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.050406 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56d8644854-9v4h9"
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.141451 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7ddf475b78-4qwq7"]
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.142377 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7ddf475b78-4qwq7" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api" containerID="cri-o://b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d" gracePeriod=30
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.143260 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7ddf475b78-4qwq7" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api-log" containerID="cri-o://c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8" gracePeriod=30
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.381415 5024 generic.go:334] "Generic (PLEG): container finished" podID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerID="2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673" exitCode=0
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.381482 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6b35e94-ac6f-43de-8b71-9785ed09145f","Type":"ContainerDied","Data":"2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673"}
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.385773 5024 generic.go:334] "Generic (PLEG): container finished" podID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerID="c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8" exitCode=143
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.386171 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ddf475b78-4qwq7" event={"ID":"de836dec-b7a6-45f0-8d8b-4d29e024e1d7","Type":"ContainerDied","Data":"c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8"}
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.389257 5024 generic.go:334] "Generic (PLEG): container finished" podID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerID="ad22e31f9cb9931334e950172830f3c2556e9a12c87cd4c930e45d6ff3f40f57" exitCode=0
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.389319 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" event={"ID":"c2845fcb-6cd4-46e4-b335-e319078d7ae8","Type":"ContainerDied","Data":"ad22e31f9cb9931334e950172830f3c2556e9a12c87cd4c930e45d6ff3f40f57"}
Nov 28 17:23:12 crc kubenswrapper[5024]: I1128
17:23:12.410940 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9","Type":"ContainerStarted","Data":"56273b277d8c4665b45584e9d4e26184d157497f5cef2001a174116cde8851f8"} Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.421946 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerStarted","Data":"0bb475ce26086a657e3b2554e7e4a9de5919013fec482f54b26765bd424f8b92"} Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.446761 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.446737236 podStartE2EDuration="4.446737236s" podCreationTimestamp="2025-11-28 17:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:12.437110613 +0000 UTC m=+1494.486031518" watchObservedRunningTime="2025-11-28 17:23:12.446737236 +0000 UTC m=+1494.495658141" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.561044 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.667703 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-svc\") pod \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.669073 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-swift-storage-0\") pod \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.669174 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d48mv\" (UniqueName: \"kubernetes.io/projected/c2845fcb-6cd4-46e4-b335-e319078d7ae8-kube-api-access-d48mv\") pod \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.669267 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-sb\") pod \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.669390 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-nb\") pod \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.669473 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-config\") pod \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\" (UID: \"c2845fcb-6cd4-46e4-b335-e319078d7ae8\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.674936 5024 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2845fcb-6cd4-46e4-b335-e319078d7ae8-kube-api-access-d48mv" (OuterVolumeSpecName: "kube-api-access-d48mv") pod "c2845fcb-6cd4-46e4-b335-e319078d7ae8" (UID: "c2845fcb-6cd4-46e4-b335-e319078d7ae8"). InnerVolumeSpecName "kube-api-access-d48mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.776590 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d48mv\" (UniqueName: \"kubernetes.io/projected/c2845fcb-6cd4-46e4-b335-e319078d7ae8-kube-api-access-d48mv\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.777150 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2845fcb-6cd4-46e4-b335-e319078d7ae8" (UID: "c2845fcb-6cd4-46e4-b335-e319078d7ae8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.781553 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c2845fcb-6cd4-46e4-b335-e319078d7ae8" (UID: "c2845fcb-6cd4-46e4-b335-e319078d7ae8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.799205 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2845fcb-6cd4-46e4-b335-e319078d7ae8" (UID: "c2845fcb-6cd4-46e4-b335-e319078d7ae8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.819010 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-config" (OuterVolumeSpecName: "config") pod "c2845fcb-6cd4-46e4-b335-e319078d7ae8" (UID: "c2845fcb-6cd4-46e4-b335-e319078d7ae8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.826339 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2845fcb-6cd4-46e4-b335-e319078d7ae8" (UID: "c2845fcb-6cd4-46e4-b335-e319078d7ae8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.827825 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.881248 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.881518 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.881664 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.881758 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.881832 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2845fcb-6cd4-46e4-b335-e319078d7ae8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.983080 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data-custom\") pod \"c6b35e94-ac6f-43de-8b71-9785ed09145f\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.983288 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6b35e94-ac6f-43de-8b71-9785ed09145f-etc-machine-id\") pod \"c6b35e94-ac6f-43de-8b71-9785ed09145f\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.983417 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-combined-ca-bundle\") pod \"c6b35e94-ac6f-43de-8b71-9785ed09145f\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.983449 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-scripts\") pod \"c6b35e94-ac6f-43de-8b71-9785ed09145f\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.983538 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crrmr\" (UniqueName: \"kubernetes.io/projected/c6b35e94-ac6f-43de-8b71-9785ed09145f-kube-api-access-crrmr\") pod \"c6b35e94-ac6f-43de-8b71-9785ed09145f\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.983612 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data\") pod \"c6b35e94-ac6f-43de-8b71-9785ed09145f\" (UID: \"c6b35e94-ac6f-43de-8b71-9785ed09145f\") " Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 
17:23:12.986425 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b35e94-ac6f-43de-8b71-9785ed09145f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c6b35e94-ac6f-43de-8b71-9785ed09145f" (UID: "c6b35e94-ac6f-43de-8b71-9785ed09145f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.988204 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6b35e94-ac6f-43de-8b71-9785ed09145f" (UID: "c6b35e94-ac6f-43de-8b71-9785ed09145f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.989910 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-scripts" (OuterVolumeSpecName: "scripts") pod "c6b35e94-ac6f-43de-8b71-9785ed09145f" (UID: "c6b35e94-ac6f-43de-8b71-9785ed09145f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:12 crc kubenswrapper[5024]: I1128 17:23:12.990223 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b35e94-ac6f-43de-8b71-9785ed09145f-kube-api-access-crrmr" (OuterVolumeSpecName: "kube-api-access-crrmr") pod "c6b35e94-ac6f-43de-8b71-9785ed09145f" (UID: "c6b35e94-ac6f-43de-8b71-9785ed09145f"). InnerVolumeSpecName "kube-api-access-crrmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.044008 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6b35e94-ac6f-43de-8b71-9785ed09145f" (UID: "c6b35e94-ac6f-43de-8b71-9785ed09145f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.093898 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6b35e94-ac6f-43de-8b71-9785ed09145f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.093929 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.093939 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.093962 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crrmr\" (UniqueName: \"kubernetes.io/projected/c6b35e94-ac6f-43de-8b71-9785ed09145f-kube-api-access-crrmr\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.093975 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.114333 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data" (OuterVolumeSpecName: "config-data") pod "c6b35e94-ac6f-43de-8b71-9785ed09145f" (UID: "c6b35e94-ac6f-43de-8b71-9785ed09145f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.197820 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b35e94-ac6f-43de-8b71-9785ed09145f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.442447 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.442429 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8449c8c-r68zx" event={"ID":"c2845fcb-6cd4-46e4-b335-e319078d7ae8","Type":"ContainerDied","Data":"4cc13020ac971a2c5ec2c07ae92013420a06a64fb1274c5bafa77459c8c396ea"} Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.442626 5024 scope.go:117] "RemoveContainer" containerID="ad22e31f9cb9931334e950172830f3c2556e9a12c87cd4c930e45d6ff3f40f57" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.449051 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerStarted","Data":"5e06065ce6d7b2c1f85ff98da035c9dc824ed8fd519c3136f7fd10e99950a85b"} Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.456410 5024 generic.go:334] "Generic (PLEG): container finished" podID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerID="cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333" exitCode=0 Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.457279 5024 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.457279 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.462342 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6b35e94-ac6f-43de-8b71-9785ed09145f","Type":"ContainerDied","Data":"cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333"}
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.462410 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c6b35e94-ac6f-43de-8b71-9785ed09145f","Type":"ContainerDied","Data":"f4553f2cdb8afcfcac0783ae01626e4feeba14fbcfc834b454e4225d049b95c2"}
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.479179 5024 scope.go:117] "RemoveContainer" containerID="d9d8f6077f1c355cf69225c0f1b1ef68e7e2e69842d9e4fec2743ff52467c769"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.543181 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-r68zx"]
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.561439 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ff8449c8c-r68zx"]
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.564110 5024 scope.go:117] "RemoveContainer" containerID="2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.573631 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.586105 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.599614 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 17:23:13 crc kubenswrapper[5024]: E1128 17:23:13.600207 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerName="dnsmasq-dns"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.600229 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerName="dnsmasq-dns"
Nov 28 17:23:13 crc kubenswrapper[5024]: E1128 17:23:13.600261 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="probe"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.600270 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="probe"
Nov 28 17:23:13 crc kubenswrapper[5024]: E1128 17:23:13.600300 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="cinder-scheduler"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.600309 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="cinder-scheduler"
Nov 28 17:23:13 crc kubenswrapper[5024]: E1128 17:23:13.600321 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerName="init"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.600327 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerName="init"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.600579 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="probe"
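
The RemoveStaleState and "Deleted CPUSet assignment" lines show the CPU and memory managers pruning per-container resource assignments left behind by the old dnsmasq-dns and cinder-scheduler pods before the replacement cinder-scheduler-0 is admitted. The gist, as a toy reconciliation over a map (the real managers persist this state via a checkpoint file rather than holding it only in memory):

```go
// Sketch: dropping resource assignments for containers that no longer
// exist, in the spirit of the RemoveStaleState log lines above.
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState deletes assignments whose pod/container is absent
// from the live set, mirroring the "Deleted CPUSet assignment" lines.
func removeStaleState(assignments map[key]string, live map[key]bool) {
	for k := range assignments {
		if !live[k] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	// Shortened pod UIDs, for illustration only.
	assignments := map[key]string{
		{"c2845fcb", "dnsmasq-dns"}:      "cpus 0-1",
		{"c6b35e94", "cinder-scheduler"}: "cpus 2-3",
		{"9d67a24d", "cinder-scheduler"}: "cpus 2-3",
	}
	live := map[key]bool{{"9d67a24d", "cinder-scheduler"}: true}
	removeStaleState(assignments, live)
	fmt.Println("remaining assignments:", len(assignments))
}
```
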
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.600610 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" containerName="dnsmasq-dns"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.600636 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" containerName="cinder-scheduler"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.602168 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.610477 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.612576 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.630361 5024 scope.go:117] "RemoveContainer" containerID="cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.663480 5024 scope.go:117] "RemoveContainer" containerID="2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673"
Nov 28 17:23:13 crc kubenswrapper[5024]: E1128 17:23:13.663828 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673\": container with ID starting with 2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673 not found: ID does not exist" containerID="2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.663864 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673"} err="failed to get container status \"2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673\": rpc error: code = NotFound desc = could not find container \"2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673\": container with ID starting with 2061e7ab8054e52669271ff1351570785882c50d57075145d94542d03e625673 not found: ID does not exist"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.663887 5024 scope.go:117] "RemoveContainer" containerID="cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333"
Nov 28 17:23:13 crc kubenswrapper[5024]: E1128 17:23:13.664258 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333\": container with ID starting with cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333 not found: ID does not exist" containerID="cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.664284 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333"} err="failed to get container status \"cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333\": rpc error: code = NotFound desc = could not find container \"cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333\": container with ID starting with cbb52da957ee3d0eece9f1affadb365e5633b75888ca09b0df71d18f3fcad333 not found: ID does not exist"
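
The NotFound errors just above look alarming but are benign: the probe and cinder-scheduler containers were already removed by an earlier pass, and the second RemoveContainer attempt merely confirms they are gone. The standard way to write such cleanup is to treat NotFound as success, as in this sketch (the runtimeClient here is a hypothetical stand-in, not the CRI client):

```go
// Sketch: idempotent container removal, where "already gone" counts as done.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("rpc error: code = NotFound")

type runtimeClient struct{ containers map[string]bool }

func (c *runtimeClient) removeContainer(id string) error {
	if !c.containers[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(c.containers, id)
	return nil
}

// removeIdempotent converts NotFound into success: the desired state
// ("container gone") already holds, so there is nothing left to do.
func removeIdempotent(c *runtimeClient, id string) error {
	err := c.removeContainer(id)
	if errors.Is(err, errNotFound) {
		return nil
	}
	return err
}

func main() {
	c := &runtimeClient{containers: map[string]bool{"cbb52da9": true}}
	fmt.Println(removeIdempotent(c, "cbb52da9")) // <nil>: actually removed
	fmt.Println(removeIdempotent(c, "cbb52da9")) // <nil>: already gone
}
```
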
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.688947 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.716018 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.716181 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.716205 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68qvz\" (UniqueName: \"kubernetes.io/projected/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-kube-api-access-68qvz\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.716266 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.716290 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.716356 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.818363 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.818457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0"
Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.818477 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68qvz\" (UniqueName: \"kubernetes.io/projected/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-kube-api-access-68qvz\") pod \"cinder-scheduler-0\" (UID:
\"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.818520 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.818538 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.818586 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.818859 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.822648 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.823350 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.823634 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.824046 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.835598 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68qvz\" (UniqueName: \"kubernetes.io/projected/9d67a24d-c44f-46a8-b24c-ac9ddb765f0f-kube-api-access-68qvz\") pod \"cinder-scheduler-0\" (UID: \"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f\") " pod="openstack/cinder-scheduler-0" Nov 28 17:23:13 crc kubenswrapper[5024]: I1128 17:23:13.938412 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:23:14 crc kubenswrapper[5024]: I1128 17:23:14.400277 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:23:14 crc kubenswrapper[5024]: W1128 17:23:14.409132 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d67a24d_c44f_46a8_b24c_ac9ddb765f0f.slice/crio-3c934a0dc93e6e958ffd3478cecb689788aa5fb41db80d60efb35818e031092e WatchSource:0}: Error finding container 3c934a0dc93e6e958ffd3478cecb689788aa5fb41db80d60efb35818e031092e: Status 404 returned error can't find the container with id 3c934a0dc93e6e958ffd3478cecb689788aa5fb41db80d60efb35818e031092e Nov 28 17:23:14 crc kubenswrapper[5024]: I1128 17:23:14.474105 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerStarted","Data":"715ed801f1e596f73d0112b81e786911311908f9e8d1751465ac8ba6857ef75e"} Nov 28 17:23:14 crc kubenswrapper[5024]: I1128 17:23:14.476605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f","Type":"ContainerStarted","Data":"3c934a0dc93e6e958ffd3478cecb689788aa5fb41db80d60efb35818e031092e"} Nov 28 17:23:14 crc kubenswrapper[5024]: I1128 17:23:14.524725 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2845fcb-6cd4-46e4-b335-e319078d7ae8" path="/var/lib/kubelet/pods/c2845fcb-6cd4-46e4-b335-e319078d7ae8/volumes" Nov 28 17:23:14 crc kubenswrapper[5024]: I1128 17:23:14.526732 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b35e94-ac6f-43de-8b71-9785ed09145f" path="/var/lib/kubelet/pods/c6b35e94-ac6f-43de-8b71-9785ed09145f/volumes" Nov 28 17:23:15 crc kubenswrapper[5024]: I1128 17:23:15.520879 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f","Type":"ContainerStarted","Data":"7285cd1dd4f2c08714176d969bebd1f40c6468126795ca0848be23787734eb66"} Nov 28 17:23:15 crc kubenswrapper[5024]: I1128 17:23:15.546061 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerStarted","Data":"f6dae8af7c9bd65f23883ec145b935412c7390547cdac3b8cd42e249faf851da"} Nov 28 17:23:15 crc kubenswrapper[5024]: I1128 17:23:15.547109 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:23:15 crc kubenswrapper[5024]: I1128 17:23:15.581594 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.811732064 podStartE2EDuration="6.581573895s" podCreationTimestamp="2025-11-28 17:23:09 +0000 UTC" firstStartedPulling="2025-11-28 17:23:10.334959303 +0000 UTC m=+1492.383880208" lastFinishedPulling="2025-11-28 17:23:15.104801134 +0000 UTC m=+1497.153722039" observedRunningTime="2025-11-28 17:23:15.576233054 +0000 UTC m=+1497.625153959" watchObservedRunningTime="2025-11-28 17:23:15.581573895 +0000 UTC m=+1497.630494800" Nov 28 17:23:15 crc kubenswrapper[5024]: I1128 17:23:15.983644 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:23:15 crc kubenswrapper[5024]: I1128 17:23:15.987251 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-5dc99dc88d-6bdv9" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.167784 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.283244 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data\") pod \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.283400 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr75h\" (UniqueName: \"kubernetes.io/projected/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-kube-api-access-hr75h\") pod \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.283520 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-combined-ca-bundle\") pod \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.283668 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-logs\") pod \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.283715 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data-custom\") pod \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\" (UID: \"de836dec-b7a6-45f0-8d8b-4d29e024e1d7\") " Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.284237 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-logs" (OuterVolumeSpecName: "logs") pod "de836dec-b7a6-45f0-8d8b-4d29e024e1d7" (UID: "de836dec-b7a6-45f0-8d8b-4d29e024e1d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.284507 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.300902 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-kube-api-access-hr75h" (OuterVolumeSpecName: "kube-api-access-hr75h") pod "de836dec-b7a6-45f0-8d8b-4d29e024e1d7" (UID: "de836dec-b7a6-45f0-8d8b-4d29e024e1d7"). InnerVolumeSpecName "kube-api-access-hr75h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.304373 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de836dec-b7a6-45f0-8d8b-4d29e024e1d7" (UID: "de836dec-b7a6-45f0-8d8b-4d29e024e1d7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.331197 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de836dec-b7a6-45f0-8d8b-4d29e024e1d7" (UID: "de836dec-b7a6-45f0-8d8b-4d29e024e1d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.388176 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data" (OuterVolumeSpecName: "config-data") pod "de836dec-b7a6-45f0-8d8b-4d29e024e1d7" (UID: "de836dec-b7a6-45f0-8d8b-4d29e024e1d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.388643 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.388675 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.388685 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.388694 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr75h\" (UniqueName: \"kubernetes.io/projected/de836dec-b7a6-45f0-8d8b-4d29e024e1d7-kube-api-access-hr75h\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.576200 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d67a24d-c44f-46a8-b24c-ac9ddb765f0f","Type":"ContainerStarted","Data":"3c0434701e68c543f102fc1d62fa74d83f777fed14654da68f94d5acc52976e7"} Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.580541 5024 generic.go:334] "Generic (PLEG): container finished" podID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerID="b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d" exitCode=0 Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.582067 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ddf475b78-4qwq7" event={"ID":"de836dec-b7a6-45f0-8d8b-4d29e024e1d7","Type":"ContainerDied","Data":"b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d"} Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.582111 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ddf475b78-4qwq7" event={"ID":"de836dec-b7a6-45f0-8d8b-4d29e024e1d7","Type":"ContainerDied","Data":"f52f6b114f5d0ef54e2038d403cd23a2ebfeb9005c7dc46e7b1bc640c7a4133b"} Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.582199 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ddf475b78-4qwq7" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.582238 5024 scope.go:117] "RemoveContainer" containerID="b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.693270 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.693255315 podStartE2EDuration="3.693255315s" podCreationTimestamp="2025-11-28 17:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:16.615490149 +0000 UTC m=+1498.664411054" watchObservedRunningTime="2025-11-28 17:23:16.693255315 +0000 UTC m=+1498.742176220" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.697910 5024 scope.go:117] "RemoveContainer" containerID="c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.707099 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7ddf475b78-4qwq7"] Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.719913 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7ddf475b78-4qwq7"] Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.852977 5024 scope.go:117] "RemoveContainer" containerID="b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d" Nov 28 17:23:16 crc kubenswrapper[5024]: E1128 17:23:16.857402 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d\": container with ID starting with b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d not found: ID does not exist" containerID="b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.857439 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d"} err="failed to get container status \"b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d\": rpc error: code = NotFound desc = could not find container \"b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d\": container with ID starting with b4eb1e043a66fec0814dfe06d769605299d7d3c61dc63522bee22b8fcab5416d not found: ID does not exist" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.857459 5024 scope.go:117] "RemoveContainer" containerID="c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8" Nov 28 17:23:16 crc kubenswrapper[5024]: E1128 17:23:16.857665 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8\": container with ID starting with c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8 not found: ID does not exist" containerID="c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8" Nov 28 17:23:16 crc kubenswrapper[5024]: I1128 17:23:16.857681 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8"} err="failed to get container status \"c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8\": rpc error: code = 
NotFound desc = could not find container \"c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8\": container with ID starting with c045bbefb12ce6c914b84f9232a1c5677fc0219eb3a5bfb6be4398fbf4eb89c8 not found: ID does not exist" Nov 28 17:23:17 crc kubenswrapper[5024]: I1128 17:23:17.160535 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54f6ccfc5c-rvfhm" Nov 28 17:23:17 crc kubenswrapper[5024]: I1128 17:23:17.965805 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:23:17 crc kubenswrapper[5024]: I1128 17:23:17.966175 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.000931 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.011665 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.316144 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 17:23:18 crc kubenswrapper[5024]: E1128 17:23:18.316678 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api-log" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.316703 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api-log" Nov 28 17:23:18 crc kubenswrapper[5024]: E1128 17:23:18.316747 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.316757 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.316991 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.317002 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" containerName="barbican-api-log" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.318064 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.320350 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cxl5z" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.320525 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.320672 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.350189 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.443012 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59a27079-b0f6-49dd-8b5e-516096f3d0e8-openstack-config-secret\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.443145 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a27079-b0f6-49dd-8b5e-516096f3d0e8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.443231 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kcx\" (UniqueName: \"kubernetes.io/projected/59a27079-b0f6-49dd-8b5e-516096f3d0e8-kube-api-access-d8kcx\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.443546 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59a27079-b0f6-49dd-8b5e-516096f3d0e8-openstack-config\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.525904 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de836dec-b7a6-45f0-8d8b-4d29e024e1d7" path="/var/lib/kubelet/pods/de836dec-b7a6-45f0-8d8b-4d29e024e1d7/volumes" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.545593 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59a27079-b0f6-49dd-8b5e-516096f3d0e8-openstack-config-secret\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.545713 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a27079-b0f6-49dd-8b5e-516096f3d0e8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.545752 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kcx\" (UniqueName: 
\"kubernetes.io/projected/59a27079-b0f6-49dd-8b5e-516096f3d0e8-kube-api-access-d8kcx\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.545818 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59a27079-b0f6-49dd-8b5e-516096f3d0e8-openstack-config\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.548896 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.548972 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.557126 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59a27079-b0f6-49dd-8b5e-516096f3d0e8-openstack-config\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.560917 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59a27079-b0f6-49dd-8b5e-516096f3d0e8-openstack-config-secret\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.565437 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kcx\" (UniqueName: \"kubernetes.io/projected/59a27079-b0f6-49dd-8b5e-516096f3d0e8-kube-api-access-d8kcx\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.569640 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a27079-b0f6-49dd-8b5e-516096f3d0e8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59a27079-b0f6-49dd-8b5e-516096f3d0e8\") " pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.604128 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.604183 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.642334 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cxl5z" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.650460 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 17:23:18 crc kubenswrapper[5024]: I1128 17:23:18.940155 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.160554 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.326921 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.327315 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.363449 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.374081 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.615974 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"59a27079-b0f6-49dd-8b5e-516096f3d0e8","Type":"ContainerStarted","Data":"03ca37ee31f1fbe70f36a67f0185e2709586f3850b8fc33945e4b64e26d39038"} Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.617006 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:19 crc kubenswrapper[5024]: I1128 17:23:19.617156 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:21 crc kubenswrapper[5024]: I1128 17:23:21.500176 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 17:23:21 crc kubenswrapper[5024]: I1128 17:23:21.500580 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:21 crc kubenswrapper[5024]: I1128 17:23:21.530111 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 17:23:21 crc kubenswrapper[5024]: I1128 17:23:21.709465 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:21 crc kubenswrapper[5024]: I1128 17:23:21.709756 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:22 crc kubenswrapper[5024]: I1128 17:23:22.106072 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:22 crc kubenswrapper[5024]: I1128 17:23:22.108430 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.457782 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-74757657c9-s2n28"] Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.460040 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.464639 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.465539 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.465635 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.479050 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-74757657c9-s2n28"] Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.558015 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/634068c7-593f-43ee-8b4e-4be8f66c51c5-log-httpd\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.558486 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-config-data\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.558610 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-internal-tls-certs\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.558802 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/634068c7-593f-43ee-8b4e-4be8f66c51c5-etc-swift\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.558938 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mb7z\" (UniqueName: \"kubernetes.io/projected/634068c7-593f-43ee-8b4e-4be8f66c51c5-kube-api-access-6mb7z\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.559052 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-combined-ca-bundle\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.559442 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/634068c7-593f-43ee-8b4e-4be8f66c51c5-run-httpd\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " 
pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.559533 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-public-tls-certs\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661272 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-public-tls-certs\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661338 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/634068c7-593f-43ee-8b4e-4be8f66c51c5-log-httpd\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661416 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-config-data\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661456 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-internal-tls-certs\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661521 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/634068c7-593f-43ee-8b4e-4be8f66c51c5-etc-swift\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661573 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mb7z\" (UniqueName: \"kubernetes.io/projected/634068c7-593f-43ee-8b4e-4be8f66c51c5-kube-api-access-6mb7z\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661595 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-combined-ca-bundle\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.661631 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/634068c7-593f-43ee-8b4e-4be8f66c51c5-run-httpd\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" 
Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.662065 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/634068c7-593f-43ee-8b4e-4be8f66c51c5-run-httpd\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.662677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/634068c7-593f-43ee-8b4e-4be8f66c51c5-log-httpd\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.668636 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-combined-ca-bundle\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.672259 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-config-data\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.673133 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-internal-tls-certs\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.678793 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/634068c7-593f-43ee-8b4e-4be8f66c51c5-public-tls-certs\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.687152 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/634068c7-593f-43ee-8b4e-4be8f66c51c5-etc-swift\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.691958 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mb7z\" (UniqueName: \"kubernetes.io/projected/634068c7-593f-43ee-8b4e-4be8f66c51c5-kube-api-access-6mb7z\") pod \"swift-proxy-74757657c9-s2n28\" (UID: \"634068c7-593f-43ee-8b4e-4be8f66c51c5\") " pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:23 crc kubenswrapper[5024]: I1128 17:23:23.792939 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:24 crc kubenswrapper[5024]: I1128 17:23:24.345266 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 17:23:24 crc kubenswrapper[5024]: W1128 17:23:24.518639 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod634068c7_593f_43ee_8b4e_4be8f66c51c5.slice/crio-b6e53fddaec76a2f910a9acdf29f90d2b5de6d76589b5e978b57f91717c47bbf WatchSource:0}: Error finding container b6e53fddaec76a2f910a9acdf29f90d2b5de6d76589b5e978b57f91717c47bbf: Status 404 returned error can't find the container with id b6e53fddaec76a2f910a9acdf29f90d2b5de6d76589b5e978b57f91717c47bbf Nov 28 17:23:24 crc kubenswrapper[5024]: I1128 17:23:24.532378 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-74757657c9-s2n28"] Nov 28 17:23:24 crc kubenswrapper[5024]: I1128 17:23:24.799577 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74757657c9-s2n28" event={"ID":"634068c7-593f-43ee-8b4e-4be8f66c51c5","Type":"ContainerStarted","Data":"b6e53fddaec76a2f910a9acdf29f90d2b5de6d76589b5e978b57f91717c47bbf"} Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.815462 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74757657c9-s2n28" event={"ID":"634068c7-593f-43ee-8b4e-4be8f66c51c5","Type":"ContainerStarted","Data":"9212587c06dd60f7ac7e3e91f20cd40aa6fb4156015253df463f7c63d0a7c2bc"} Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.816044 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.816058 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74757657c9-s2n28" event={"ID":"634068c7-593f-43ee-8b4e-4be8f66c51c5","Type":"ContainerStarted","Data":"79050b46e8ac39b8d1f38fa77ddb43cc50c5cbb4da12aaa260ad41af13fccd20"} Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.837707 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-74757657c9-s2n28" podStartSLOduration=2.837681346 podStartE2EDuration="2.837681346s" podCreationTimestamp="2025-11-28 17:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:25.834968629 +0000 UTC m=+1507.883889534" watchObservedRunningTime="2025-11-28 17:23:25.837681346 +0000 UTC m=+1507.886602251" Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.934156 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.934534 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-central-agent" containerID="cri-o://0bb475ce26086a657e3b2554e7e4a9de5919013fec482f54b26765bd424f8b92" gracePeriod=30 Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.935639 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="sg-core" containerID="cri-o://715ed801f1e596f73d0112b81e786911311908f9e8d1751465ac8ba6857ef75e" gracePeriod=30 Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.935795 5024 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="proxy-httpd" containerID="cri-o://f6dae8af7c9bd65f23883ec145b935412c7390547cdac3b8cd42e249faf851da" gracePeriod=30 Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.935855 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-notification-agent" containerID="cri-o://5e06065ce6d7b2c1f85ff98da035c9dc824ed8fd519c3136f7fd10e99950a85b" gracePeriod=30 Nov 28 17:23:25 crc kubenswrapper[5024]: I1128 17:23:25.950395 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.203:3000/\": EOF" Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.296433 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.296951 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-log" containerID="cri-o://cbe91c1748938d607ee44052b70c7a93c5bb58ef06660b54026ef1ad41c2a9a1" gracePeriod=30 Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.297055 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-httpd" containerID="cri-o://56273b277d8c4665b45584e9d4e26184d157497f5cef2001a174116cde8851f8" gracePeriod=30 Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.851454 5024 generic.go:334] "Generic (PLEG): container finished" podID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerID="f6dae8af7c9bd65f23883ec145b935412c7390547cdac3b8cd42e249faf851da" exitCode=0 Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.851905 5024 generic.go:334] "Generic (PLEG): container finished" podID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerID="715ed801f1e596f73d0112b81e786911311908f9e8d1751465ac8ba6857ef75e" exitCode=2 Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.851924 5024 generic.go:334] "Generic (PLEG): container finished" podID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerID="5e06065ce6d7b2c1f85ff98da035c9dc824ed8fd519c3136f7fd10e99950a85b" exitCode=0 Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.851934 5024 generic.go:334] "Generic (PLEG): container finished" podID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerID="0bb475ce26086a657e3b2554e7e4a9de5919013fec482f54b26765bd424f8b92" exitCode=0 Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.851502 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerDied","Data":"f6dae8af7c9bd65f23883ec145b935412c7390547cdac3b8cd42e249faf851da"} Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.852003 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerDied","Data":"715ed801f1e596f73d0112b81e786911311908f9e8d1751465ac8ba6857ef75e"} Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.852014 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerDied","Data":"5e06065ce6d7b2c1f85ff98da035c9dc824ed8fd519c3136f7fd10e99950a85b"} Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.852036 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerDied","Data":"0bb475ce26086a657e3b2554e7e4a9de5919013fec482f54b26765bd424f8b92"} Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.860800 5024 generic.go:334] "Generic (PLEG): container finished" podID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerID="cbe91c1748938d607ee44052b70c7a93c5bb58ef06660b54026ef1ad41c2a9a1" exitCode=143 Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.860889 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9","Type":"ContainerDied","Data":"cbe91c1748938d607ee44052b70c7a93c5bb58ef06660b54026ef1ad41c2a9a1"} Nov 28 17:23:26 crc kubenswrapper[5024]: I1128 17:23:26.861054 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:28 crc kubenswrapper[5024]: I1128 17:23:28.334273 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:28 crc kubenswrapper[5024]: I1128 17:23:28.334821 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-log" containerID="cri-o://cfd1fe099070594ae6d1d27eac655e81370e8102945e843cf4da33af61162ce0" gracePeriod=30 Nov 28 17:23:28 crc kubenswrapper[5024]: I1128 17:23:28.334962 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-httpd" containerID="cri-o://9b0e6c32635601ba4854b72f8b26a5a80b5fff1d851b0f3e383fdee73f69a080" gracePeriod=30 Nov 28 17:23:28 crc kubenswrapper[5024]: I1128 17:23:28.935931 5024 generic.go:334] "Generic (PLEG): container finished" podID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerID="cfd1fe099070594ae6d1d27eac655e81370e8102945e843cf4da33af61162ce0" exitCode=143 Nov 28 17:23:28 crc kubenswrapper[5024]: I1128 17:23:28.936009 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1435cd0-7a59-45be-9658-d875edd55a7f","Type":"ContainerDied","Data":"cfd1fe099070594ae6d1d27eac655e81370e8102945e843cf4da33af61162ce0"} Nov 28 17:23:29 crc kubenswrapper[5024]: I1128 17:23:29.957981 5024 generic.go:334] "Generic (PLEG): container finished" podID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerID="56273b277d8c4665b45584e9d4e26184d157497f5cef2001a174116cde8851f8" exitCode=0 Nov 28 17:23:29 crc kubenswrapper[5024]: I1128 17:23:29.958054 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9","Type":"ContainerDied","Data":"56273b277d8c4665b45584e9d4e26184d157497f5cef2001a174116cde8851f8"} Nov 28 17:23:32 crc kubenswrapper[5024]: I1128 17:23:32.014091 5024 generic.go:334] "Generic (PLEG): container finished" podID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerID="9b0e6c32635601ba4854b72f8b26a5a80b5fff1d851b0f3e383fdee73f69a080" exitCode=0 Nov 28 17:23:32 crc kubenswrapper[5024]: I1128 
17:23:32.014311 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1435cd0-7a59-45be-9658-d875edd55a7f","Type":"ContainerDied","Data":"9b0e6c32635601ba4854b72f8b26a5a80b5fff1d851b0f3e383fdee73f69a080"} Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.040116 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"826b547a-5534-4c11-83b5-f09a5d93e6c0","Type":"ContainerDied","Data":"db4483a4d8363ecab0178329624848cb456ba203790f34d300906e26dab944e3"} Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.040183 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4483a4d8363ecab0178329624848cb456ba203790f34d300906e26dab944e3" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.261517 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.396891 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-sg-core-conf-yaml\") pod \"826b547a-5534-4c11-83b5-f09a5d93e6c0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.397313 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n8x2\" (UniqueName: \"kubernetes.io/projected/826b547a-5534-4c11-83b5-f09a5d93e6c0-kube-api-access-7n8x2\") pod \"826b547a-5534-4c11-83b5-f09a5d93e6c0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.397467 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-config-data\") pod \"826b547a-5534-4c11-83b5-f09a5d93e6c0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.397523 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-scripts\") pod \"826b547a-5534-4c11-83b5-f09a5d93e6c0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.397579 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-combined-ca-bundle\") pod \"826b547a-5534-4c11-83b5-f09a5d93e6c0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.397613 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-log-httpd\") pod \"826b547a-5534-4c11-83b5-f09a5d93e6c0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.397647 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-run-httpd\") pod \"826b547a-5534-4c11-83b5-f09a5d93e6c0\" (UID: \"826b547a-5534-4c11-83b5-f09a5d93e6c0\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.400161 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "826b547a-5534-4c11-83b5-f09a5d93e6c0" (UID: "826b547a-5534-4c11-83b5-f09a5d93e6c0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.400726 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "826b547a-5534-4c11-83b5-f09a5d93e6c0" (UID: "826b547a-5534-4c11-83b5-f09a5d93e6c0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.415152 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-scripts" (OuterVolumeSpecName: "scripts") pod "826b547a-5534-4c11-83b5-f09a5d93e6c0" (UID: "826b547a-5534-4c11-83b5-f09a5d93e6c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.415174 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826b547a-5534-4c11-83b5-f09a5d93e6c0-kube-api-access-7n8x2" (OuterVolumeSpecName: "kube-api-access-7n8x2") pod "826b547a-5534-4c11-83b5-f09a5d93e6c0" (UID: "826b547a-5534-4c11-83b5-f09a5d93e6c0"). InnerVolumeSpecName "kube-api-access-7n8x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.484994 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "826b547a-5534-4c11-83b5-f09a5d93e6c0" (UID: "826b547a-5534-4c11-83b5-f09a5d93e6c0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.500114 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.500154 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.500168 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/826b547a-5534-4c11-83b5-f09a5d93e6c0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.500178 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.500187 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n8x2\" (UniqueName: \"kubernetes.io/projected/826b547a-5534-4c11-83b5-f09a5d93e6c0-kube-api-access-7n8x2\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.521898 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "826b547a-5534-4c11-83b5-f09a5d93e6c0" (UID: "826b547a-5534-4c11-83b5-f09a5d93e6c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.534474 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.599263 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615341 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-internal-tls-certs\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htxm2\" (UniqueName: \"kubernetes.io/projected/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-kube-api-access-htxm2\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615606 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-httpd-run\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615624 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615658 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-scripts\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615734 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-logs\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615761 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-config-data\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.615794 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-combined-ca-bundle\") pod \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\" (UID: \"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.616464 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.616683 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.619232 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-logs" (OuterVolumeSpecName: "logs") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.621316 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-kube-api-access-htxm2" (OuterVolumeSpecName: "kube-api-access-htxm2") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "kube-api-access-htxm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.625571 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.626357 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-scripts" (OuterVolumeSpecName: "scripts") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.647630 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-config-data" (OuterVolumeSpecName: "config-data") pod "826b547a-5534-4c11-83b5-f09a5d93e6c0" (UID: "826b547a-5534-4c11-83b5-f09a5d93e6c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.709898 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.717929 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-logs\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718136 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-scripts\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718164 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-public-tls-certs\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718234 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-httpd-run\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718290 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-combined-ca-bundle\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718340 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718438 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44xnn\" (UniqueName: \"kubernetes.io/projected/e1435cd0-7a59-45be-9658-d875edd55a7f-kube-api-access-44xnn\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718466 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-config-data\") pod \"e1435cd0-7a59-45be-9658-d875edd55a7f\" (UID: \"e1435cd0-7a59-45be-9658-d875edd55a7f\") " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.718988 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719006 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719032 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/826b547a-5534-4c11-83b5-f09a5d93e6c0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719042 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htxm2\" (UniqueName: \"kubernetes.io/projected/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-kube-api-access-htxm2\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719062 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719071 5024 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719080 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719796 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-logs" (OuterVolumeSpecName: "logs") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.719956 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.724185 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.726478 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1435cd0-7a59-45be-9658-d875edd55a7f-kube-api-access-44xnn" (OuterVolumeSpecName: "kube-api-access-44xnn") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "kube-api-access-44xnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.728792 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-scripts" (OuterVolumeSpecName: "scripts") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.749444 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.749541 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-config-data" (OuterVolumeSpecName: "config-data") pod "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" (UID: "848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.766504 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.784735 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.808203 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.809643 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-74757657c9-s2n28" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821841 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821878 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44xnn\" (UniqueName: \"kubernetes.io/projected/e1435cd0-7a59-45be-9658-d875edd55a7f-kube-api-access-44xnn\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821891 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821906 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821916 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821927 5024 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1435cd0-7a59-45be-9658-d875edd55a7f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: 
I1128 17:23:33.821938 5024 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821948 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.821986 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.828684 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.924550 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-config-data" (OuterVolumeSpecName: "config-data") pod "e1435cd0-7a59-45be-9658-d875edd55a7f" (UID: "e1435cd0-7a59-45be-9658-d875edd55a7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.929459 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 17:23:33 crc kubenswrapper[5024]: I1128 17:23:33.931210 5024 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.035290 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1435cd0-7a59-45be-9658-d875edd55a7f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.036766 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.080426 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1435cd0-7a59-45be-9658-d875edd55a7f","Type":"ContainerDied","Data":"46d781dc7dc48f0ca4945ddafd099ae895d5dca43c32663f0f7f7e76aea9a190"} Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.080487 5024 scope.go:117] "RemoveContainer" containerID="9b0e6c32635601ba4854b72f8b26a5a80b5fff1d851b0f3e383fdee73f69a080" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.080754 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.100434 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9","Type":"ContainerDied","Data":"0845d3b9d9185ed5360d506e809b0f88dfaeafd0852347e01e4c612fa327de22"} Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.100540 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.105606 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"59a27079-b0f6-49dd-8b5e-516096f3d0e8","Type":"ContainerStarted","Data":"1084ddb26f5498816517df45df4cf59c5d0db312b2794fc067fbd6bacd70e64c"} Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.105713 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.129282 5024 scope.go:117] "RemoveContainer" containerID="cfd1fe099070594ae6d1d27eac655e81370e8102945e843cf4da33af61162ce0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.129772 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.267369202 podStartE2EDuration="16.129759583s" podCreationTimestamp="2025-11-28 17:23:18 +0000 UTC" firstStartedPulling="2025-11-28 17:23:19.186665802 +0000 UTC m=+1501.235586707" lastFinishedPulling="2025-11-28 17:23:33.049056183 +0000 UTC m=+1515.097977088" observedRunningTime="2025-11-28 17:23:34.127778276 +0000 UTC m=+1516.176699181" watchObservedRunningTime="2025-11-28 17:23:34.129759583 +0000 UTC m=+1516.178680488" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.181485 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.188689 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.201154 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.213863 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227229 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227774 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="sg-core" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227787 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="sg-core" Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227811 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-log" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227817 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-log" Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227826 5024 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-notification-agent" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227832 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-notification-agent" Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227855 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-central-agent" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227861 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-central-agent" Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227872 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-log" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227877 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-log" Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227895 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227900 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227911 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="proxy-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227916 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="proxy-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: E1128 17:23:34.227925 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.227934 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.228166 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-log" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.228177 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-central-agent" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.228190 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="ceilometer-notification-agent" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.228211 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-log" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.228221 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="sg-core" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.228231 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" containerName="glance-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 
17:23:34.228240 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" containerName="proxy-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.228256 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" containerName="glance-httpd" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.229425 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.234193 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.234376 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mcqcv" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.234526 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.234631 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.234789 5024 scope.go:117] "RemoveContainer" containerID="56273b277d8c4665b45584e9d4e26184d157497f5cef2001a174116cde8851f8" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.261107 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.279153 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.281127 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.285743 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.287188 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.295776 5024 scope.go:117] "RemoveContainer" containerID="cbe91c1748938d607ee44052b70c7a93c5bb58ef06660b54026ef1ad41c2a9a1" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.303059 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.322920 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.337705 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.347421 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.347474 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72463ea-b813-4303-bd6a-78c55da993de-logs\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.347493 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f72463ea-b813-4303-bd6a-78c55da993de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.347649 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swqg4\" (UniqueName: \"kubernetes.io/projected/f72463ea-b813-4303-bd6a-78c55da993de-kube-api-access-swqg4\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.347694 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.347751 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-config-data\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 
17:23:34.347817 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.347839 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-scripts\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.349325 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.352694 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.354631 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.356536 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.365132 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449762 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449801 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449825 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-config-data\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449868 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swqg4\" (UniqueName: \"kubernetes.io/projected/f72463ea-b813-4303-bd6a-78c55da993de-kube-api-access-swqg4\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449896 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-scripts\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449914 5024 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449946 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.449971 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpm2z\" (UniqueName: \"kubernetes.io/projected/a41a632a-a62a-4fa6-8326-d916ab8939e5-kube-api-access-fpm2z\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.450074 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.450133 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-config-data\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.450810 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmm9\" (UniqueName: \"kubernetes.io/projected/1642d7d9-4b46-4214-9d51-c3f2681b3f35-kube-api-access-wdmm9\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.450965 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451010 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451068 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-scripts\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451112 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1642d7d9-4b46-4214-9d51-c3f2681b3f35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451211 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1642d7d9-4b46-4214-9d51-c3f2681b3f35-logs\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451266 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451386 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451488 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451520 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72463ea-b813-4303-bd6a-78c55da993de-logs\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451538 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f72463ea-b813-4303-bd6a-78c55da993de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451607 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451648 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.451992 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.452416 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f72463ea-b813-4303-bd6a-78c55da993de-logs\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.452606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f72463ea-b813-4303-bd6a-78c55da993de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.456713 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-config-data\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.457687 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-scripts\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.460797 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.464705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f72463ea-b813-4303-bd6a-78c55da993de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.472887 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swqg4\" (UniqueName: \"kubernetes.io/projected/f72463ea-b813-4303-bd6a-78c55da993de-kube-api-access-swqg4\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.496224 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f72463ea-b813-4303-bd6a-78c55da993de\") " pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.512318 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826b547a-5534-4c11-83b5-f09a5d93e6c0" path="/var/lib/kubelet/pods/826b547a-5534-4c11-83b5-f09a5d93e6c0/volumes" Nov 28 17:23:34 crc 
kubenswrapper[5024]: I1128 17:23:34.513755 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9" path="/var/lib/kubelet/pods/848a2d6b-0e0f-4c1c-bed2-9f9f06d7beb9/volumes" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.515237 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1435cd0-7a59-45be-9658-d875edd55a7f" path="/var/lib/kubelet/pods/e1435cd0-7a59-45be-9658-d875edd55a7f/volumes" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.554610 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.554730 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdmm9\" (UniqueName: \"kubernetes.io/projected/1642d7d9-4b46-4214-9d51-c3f2681b3f35-kube-api-access-wdmm9\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.554858 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.554896 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1642d7d9-4b46-4214-9d51-c3f2681b3f35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.554938 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1642d7d9-4b46-4214-9d51-c3f2681b3f35-logs\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.554968 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555015 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555088 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555115 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555150 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555210 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-config-data\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555248 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-scripts\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555273 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555329 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpm2z\" (UniqueName: \"kubernetes.io/projected/a41a632a-a62a-4fa6-8326-d916ab8939e5-kube-api-access-fpm2z\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555583 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1642d7d9-4b46-4214-9d51-c3f2681b3f35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555599 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.555609 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1642d7d9-4b46-4214-9d51-c3f2681b3f35-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.556217 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.556239 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.564789 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-config-data\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.565431 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-scripts\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.565932 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.566090 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.569851 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.584687 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.584705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1642d7d9-4b46-4214-9d51-c3f2681b3f35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.585261 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.586514 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdmm9\" (UniqueName: \"kubernetes.io/projected/1642d7d9-4b46-4214-9d51-c3f2681b3f35-kube-api-access-wdmm9\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.593325 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.620974 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1642d7d9-4b46-4214-9d51-c3f2681b3f35\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.620972 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpm2z\" (UniqueName: \"kubernetes.io/projected/a41a632a-a62a-4fa6-8326-d916ab8939e5-kube-api-access-fpm2z\") pod \"ceilometer-0\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.690077 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:34 crc kubenswrapper[5024]: I1128 17:23:34.907858 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:35 crc kubenswrapper[5024]: I1128 17:23:35.392788 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:35 crc kubenswrapper[5024]: I1128 17:23:35.449121 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:23:35 crc kubenswrapper[5024]: I1128 17:23:35.687812 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:23:35 crc kubenswrapper[5024]: W1128 17:23:35.711484 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1642d7d9_4b46_4214_9d51_c3f2681b3f35.slice/crio-b19a0edfbc1b538972c00f37c301855e18eb6e7f638f0c03a46d6b15e3125813 WatchSource:0}: Error finding container b19a0edfbc1b538972c00f37c301855e18eb6e7f638f0c03a46d6b15e3125813: Status 404 returned error can't find the container with id b19a0edfbc1b538972c00f37c301855e18eb6e7f638f0c03a46d6b15e3125813 Nov 28 17:23:35 crc kubenswrapper[5024]: I1128 17:23:35.949079 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.155705 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1642d7d9-4b46-4214-9d51-c3f2681b3f35","Type":"ContainerStarted","Data":"b19a0edfbc1b538972c00f37c301855e18eb6e7f638f0c03a46d6b15e3125813"} Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.163802 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f72463ea-b813-4303-bd6a-78c55da993de","Type":"ContainerStarted","Data":"ee33974a4bbd7d4cf96407085c5ef862060637c715cb5b1e58c0ea923ba644d5"} Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.165692 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerStarted","Data":"386caeabb788602cfad77065c7b4e990f2911d467bca27128cb528afd9e64749"} Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.696920 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.863232 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-etc-machine-id\") pod \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.863809 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-combined-ca-bundle\") pod \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.864086 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-scripts\") pod \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.864139 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-logs\") pod \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.864134 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d40d007c-1b46-49b2-b8ef-5c5332ba74b7" (UID: "d40d007c-1b46-49b2-b8ef-5c5332ba74b7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.864262 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f95nh\" (UniqueName: \"kubernetes.io/projected/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-kube-api-access-f95nh\") pod \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.864325 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data\") pod \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.864440 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data-custom\") pod \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\" (UID: \"d40d007c-1b46-49b2-b8ef-5c5332ba74b7\") " Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.865159 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-logs" (OuterVolumeSpecName: "logs") pod "d40d007c-1b46-49b2-b8ef-5c5332ba74b7" (UID: "d40d007c-1b46-49b2-b8ef-5c5332ba74b7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.865666 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.865691 5024 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.877184 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d40d007c-1b46-49b2-b8ef-5c5332ba74b7" (UID: "d40d007c-1b46-49b2-b8ef-5c5332ba74b7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.877316 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-scripts" (OuterVolumeSpecName: "scripts") pod "d40d007c-1b46-49b2-b8ef-5c5332ba74b7" (UID: "d40d007c-1b46-49b2-b8ef-5c5332ba74b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.877742 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-kube-api-access-f95nh" (OuterVolumeSpecName: "kube-api-access-f95nh") pod "d40d007c-1b46-49b2-b8ef-5c5332ba74b7" (UID: "d40d007c-1b46-49b2-b8ef-5c5332ba74b7"). InnerVolumeSpecName "kube-api-access-f95nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.936347 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d40d007c-1b46-49b2-b8ef-5c5332ba74b7" (UID: "d40d007c-1b46-49b2-b8ef-5c5332ba74b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.968766 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.968805 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.968815 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f95nh\" (UniqueName: \"kubernetes.io/projected/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-kube-api-access-f95nh\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.968826 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:36 crc kubenswrapper[5024]: I1128 17:23:36.976594 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data" (OuterVolumeSpecName: "config-data") pod "d40d007c-1b46-49b2-b8ef-5c5332ba74b7" (UID: "d40d007c-1b46-49b2-b8ef-5c5332ba74b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.071341 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d007c-1b46-49b2-b8ef-5c5332ba74b7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.184866 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerStarted","Data":"82f69f7b7e9ee7387ced8e1aa87d6491efffe0f2dec97ef139107b8df63ef8e8"} Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.187345 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f72463ea-b813-4303-bd6a-78c55da993de","Type":"ContainerStarted","Data":"eedf648b9abdd42f076cd95dd1f72444e0dc2bb242e94674cdd0ef45f0d0c358"} Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.190068 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1642d7d9-4b46-4214-9d51-c3f2681b3f35","Type":"ContainerStarted","Data":"9dc3a7d721f00f270e73950f1dc3e15d82b11a734ce968f2d058d2629918bf8c"} Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.197978 5024 generic.go:334] "Generic (PLEG): container finished" podID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerID="3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941" exitCode=137 Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.198067 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d40d007c-1b46-49b2-b8ef-5c5332ba74b7","Type":"ContainerDied","Data":"3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941"} Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.198077 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.198103 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d40d007c-1b46-49b2-b8ef-5c5332ba74b7","Type":"ContainerDied","Data":"9db52a7a5bcd07b5a58b6b48be61ec7f0dbd23760c1d92572a1e630d460d86e0"} Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.198141 5024 scope.go:117] "RemoveContainer" containerID="3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.370652 5024 scope.go:117] "RemoveContainer" containerID="8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.418348 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.464852 5024 scope.go:117] "RemoveContainer" containerID="3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.471229 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:37 crc kubenswrapper[5024]: E1128 17:23:37.472449 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941\": container with ID starting with 3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941 not found: ID does not exist" containerID="3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.472552 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941"} err="failed to get container status \"3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941\": rpc error: code = NotFound desc = could not find container \"3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941\": container with ID starting with 3976e9447f0abe3435e5849f89ae77b814686ce3093e4ae7c3d0c6f6edca8941 not found: ID does not exist" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.472666 5024 scope.go:117] "RemoveContainer" containerID="8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a" Nov 28 17:23:37 crc kubenswrapper[5024]: E1128 17:23:37.476425 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a\": container with ID starting with 8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a not found: ID does not exist" containerID="8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.476474 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a"} err="failed to get container status \"8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a\": rpc error: code = NotFound desc = could not find container \"8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a\": container with ID starting with 8c645f19ca0df6e48f5bf2ffd1f71ba8abc9af80c7274e25622aa89be140731a not found: ID does not exist" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.488415 5024 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:37 crc kubenswrapper[5024]: E1128 17:23:37.489196 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api-log" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.489238 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api-log" Nov 28 17:23:37 crc kubenswrapper[5024]: E1128 17:23:37.489257 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.489266 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.489671 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.489710 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40d007c-1b46-49b2-b8ef-5c5332ba74b7" containerName="cinder-api-log" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.491534 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.494072 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.494179 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.494574 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.519375 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604347 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-scripts\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604418 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-config-data-custom\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604503 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604591 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-logs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 
28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604672 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-config-data\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604720 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mklvf\" (UniqueName: \"kubernetes.io/projected/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-kube-api-access-mklvf\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604754 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604885 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.604913 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.756892 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-scripts\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757268 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-config-data-custom\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757327 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757346 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-logs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757401 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-config-data\") pod 
\"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757437 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mklvf\" (UniqueName: \"kubernetes.io/projected/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-kube-api-access-mklvf\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757455 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757538 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.757560 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.760765 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.761054 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-logs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.763947 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-scripts\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.764235 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.764600 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.765430 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.766805 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-config-data\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.769837 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-config-data-custom\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.780773 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mklvf\" (UniqueName: \"kubernetes.io/projected/f9ace56d-5740-45f2-b8ac-04c2ed9b4270-kube-api-access-mklvf\") pod \"cinder-api-0\" (UID: \"f9ace56d-5740-45f2-b8ac-04c2ed9b4270\") " pod="openstack/cinder-api-0" Nov 28 17:23:37 crc kubenswrapper[5024]: I1128 17:23:37.820265 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.215066 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f72463ea-b813-4303-bd6a-78c55da993de","Type":"ContainerStarted","Data":"3dd7064ae2c1a4ec7cc5febb14129a429fbbf9830cbc037cd18251a611c1335b"} Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.218788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1642d7d9-4b46-4214-9d51-c3f2681b3f35","Type":"ContainerStarted","Data":"10459d41a00a8000d512dacf1ef027b9c7b15358170bca60fa20575d8bdd545d"} Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.238833 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerStarted","Data":"8f60047233d3a2b49addc663cc7233ac40ef32f8aac275dc99dc5b688c91832f"} Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.254675 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.254654161 podStartE2EDuration="4.254654161s" podCreationTimestamp="2025-11-28 17:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:38.236391073 +0000 UTC m=+1520.285311968" watchObservedRunningTime="2025-11-28 17:23:38.254654161 +0000 UTC m=+1520.303575066" Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.303240 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.303208329 podStartE2EDuration="4.303208329s" podCreationTimestamp="2025-11-28 17:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:38.260725324 +0000 UTC m=+1520.309646229" watchObservedRunningTime="2025-11-28 17:23:38.303208329 +0000 UTC m=+1520.352129234" Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.327275 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:23:38 crc 
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.718038 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d5797c764-zffzc"]
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.722195 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d5797c764-zffzc"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.735486 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hl9cn"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.735856 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.735908 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.785947 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d5797c764-zffzc"]
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.887303 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-84gfm"]
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.890088 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.904644 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqfv7\" (UniqueName: \"kubernetes.io/projected/aca9dafd-8069-42d9-b644-12fc96509330-kube-api-access-xqfv7\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.905006 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.905163 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data-custom\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.905290 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-combined-ca-bundle\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc"
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.905451 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5d56577dc4-kw6js"]
Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.907185 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5d56577dc4-kw6js"
Need to start a new one" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.915295 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.928572 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-84gfm"] Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.943287 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5d56577dc4-kw6js"] Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.978358 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7d7fb5b5d9-qtddd"] Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.980673 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:38 crc kubenswrapper[5024]: I1128 17:23:38.985327 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.005594 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d7fb5b5d9-qtddd"] Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007097 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data-custom\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007185 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007214 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007246 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdlq\" (UniqueName: \"kubernetes.io/projected/8494f18f-160a-41ae-802d-a490037f0aec-kube-api-access-5hdlq\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007297 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007323 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data-custom\") pod \"heat-engine-d5797c764-zffzc\" (UID: 
\"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007352 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-config\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007370 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-combined-ca-bundle\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007398 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-combined-ca-bundle\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007444 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqfv7\" (UniqueName: \"kubernetes.io/projected/aca9dafd-8069-42d9-b644-12fc96509330-kube-api-access-xqfv7\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007478 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007503 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007525 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbhcp\" (UniqueName: \"kubernetes.io/projected/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-kube-api-access-zbhcp\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.007547 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.017875 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-combined-ca-bundle\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.018722 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.062787 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data-custom\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.068423 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqfv7\" (UniqueName: \"kubernetes.io/projected/aca9dafd-8069-42d9-b644-12fc96509330-kube-api-access-xqfv7\") pod \"heat-engine-d5797c764-zffzc\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.111902 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.111984 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-config\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112041 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-combined-ca-bundle\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112073 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-combined-ca-bundle\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112134 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112159 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8nbw\" (UniqueName: 
\"kubernetes.io/projected/6351b489-49ff-47a9-bb4f-26632893416c-kube-api-access-m8nbw\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112204 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbhcp\" (UniqueName: \"kubernetes.io/projected/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-kube-api-access-zbhcp\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112228 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112258 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data-custom\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112281 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data-custom\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112333 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112363 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.112399 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdlq\" (UniqueName: \"kubernetes.io/projected/8494f18f-160a-41ae-802d-a490037f0aec-kube-api-access-5hdlq\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.113441 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.122209 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.124380 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-config\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.125898 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data-custom\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.126730 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.127336 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-combined-ca-bundle\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.131776 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.134511 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbhcp\" (UniqueName: \"kubernetes.io/projected/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-kube-api-access-zbhcp\") pod \"heat-cfnapi-5d56577dc4-kw6js\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") " pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.138597 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdlq\" (UniqueName: \"kubernetes.io/projected/8494f18f-160a-41ae-802d-a490037f0aec-kube-api-access-5hdlq\") pod \"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.138993 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-svc\") pod 
\"dnsmasq-dns-7756b9d78c-84gfm\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.216746 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data-custom\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.216822 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.216934 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-combined-ca-bundle\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.217002 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8nbw\" (UniqueName: \"kubernetes.io/projected/6351b489-49ff-47a9-bb4f-26632893416c-kube-api-access-m8nbw\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.222961 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-combined-ca-bundle\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.223213 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data-custom\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.223852 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.245825 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8nbw\" (UniqueName: \"kubernetes.io/projected/6351b489-49ff-47a9-bb4f-26632893416c-kube-api-access-m8nbw\") pod \"heat-api-7d7fb5b5d9-qtddd\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") " pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.264257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerStarted","Data":"bf757829288d6b021bc184437632f82b85387f3458f8353c874e74d9ab14a1ea"} Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.278816 
Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.347871 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm"
Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.363468 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d5797c764-zffzc"
Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.705448 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5d56577dc4-kw6js"
Nov 28 17:23:39 crc kubenswrapper[5024]: I1128 17:23:39.707712 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d7fb5b5d9-qtddd"
Nov 28 17:23:40 crc kubenswrapper[5024]: W1128 17:23:40.163661 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaca9dafd_8069_42d9_b644_12fc96509330.slice/crio-716072e8b8d484f5e3823ce24d09a0c4e17556392bbd8d9cc9085b9dae4ae6e2 WatchSource:0}: Error finding container 716072e8b8d484f5e3823ce24d09a0c4e17556392bbd8d9cc9085b9dae4ae6e2: Status 404 returned error can't find the container with id 716072e8b8d484f5e3823ce24d09a0c4e17556392bbd8d9cc9085b9dae4ae6e2
Nov 28 17:23:40 crc kubenswrapper[5024]: I1128 17:23:40.166634 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d5797c764-zffzc"]
Nov 28 17:23:40 crc kubenswrapper[5024]: I1128 17:23:40.277161 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-84gfm"]
Nov 28 17:23:40 crc kubenswrapper[5024]: I1128 17:23:40.367882 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5797c764-zffzc" event={"ID":"aca9dafd-8069-42d9-b644-12fc96509330","Type":"ContainerStarted","Data":"716072e8b8d484f5e3823ce24d09a0c4e17556392bbd8d9cc9085b9dae4ae6e2"}
Nov 28 17:23:40 crc kubenswrapper[5024]: I1128 17:23:40.369284 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" event={"ID":"8494f18f-160a-41ae-802d-a490037f0aec","Type":"ContainerStarted","Data":"adbeadb7906ab8d9e462ca8df638ed977cb206a1fb1ae73a2c9be6bdb8293a57"}
Nov 28 17:23:40 crc kubenswrapper[5024]: I1128 17:23:40.385632 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f9ace56d-5740-45f2-b8ac-04c2ed9b4270","Type":"ContainerStarted","Data":"1656ecfbb0af0a8a65cadfe4d5143b6577fc7edbe92d9b6e814f4b775e3d837f"}
Nov 28 17:23:40 crc kubenswrapper[5024]: I1128 17:23:40.527371 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d7fb5b5d9-qtddd"]
Nov 28 17:23:40 crc kubenswrapper[5024]: W1128 17:23:40.534775 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6351b489_49ff_47a9_bb4f_26632893416c.slice/crio-bdb45099d84b938068bc051a2f06f8ef6592ed2c3353a61ffe59c82472e2ffa9 WatchSource:0}: Error finding container bdb45099d84b938068bc051a2f06f8ef6592ed2c3353a61ffe59c82472e2ffa9: Status 404 returned error can't find the container with id bdb45099d84b938068bc051a2f06f8ef6592ed2c3353a61ffe59c82472e2ffa9
Nov 28 17:23:40 crc kubenswrapper[5024]: I1128 17:23:40.773691 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5d56577dc4-kw6js"]
"SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5d56577dc4-kw6js"] Nov 28 17:23:40 crc kubenswrapper[5024]: W1128 17:23:40.774210 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa3cba9b_0c1f_4d10_87cc_7159d1e1b0c7.slice/crio-9366ff0ed727d2fde23868a5c7782bd9a11a717a40aabadb1103941deb18aa6e WatchSource:0}: Error finding container 9366ff0ed727d2fde23868a5c7782bd9a11a717a40aabadb1103941deb18aa6e: Status 404 returned error can't find the container with id 9366ff0ed727d2fde23868a5c7782bd9a11a717a40aabadb1103941deb18aa6e Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.399716 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerStarted","Data":"3a8f40b037ce3bd2d5611352de6ae3e7dd3d44add89c632e74057caaa7410e2b"} Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.400618 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-central-agent" containerID="cri-o://82f69f7b7e9ee7387ced8e1aa87d6491efffe0f2dec97ef139107b8df63ef8e8" gracePeriod=30 Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.400651 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.400673 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="sg-core" containerID="cri-o://bf757829288d6b021bc184437632f82b85387f3458f8353c874e74d9ab14a1ea" gracePeriod=30 Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.400723 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-notification-agent" containerID="cri-o://8f60047233d3a2b49addc663cc7233ac40ef32f8aac275dc99dc5b688c91832f" gracePeriod=30 Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.400698 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="proxy-httpd" containerID="cri-o://3a8f40b037ce3bd2d5611352de6ae3e7dd3d44add89c632e74057caaa7410e2b" gracePeriod=30 Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.405788 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f9ace56d-5740-45f2-b8ac-04c2ed9b4270","Type":"ContainerStarted","Data":"6f1639279accb86b6a911d5bb8e22f9f9354c6a098e0d201afecfb94d07e6288"} Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.405992 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.408927 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5797c764-zffzc" event={"ID":"aca9dafd-8069-42d9-b644-12fc96509330","Type":"ContainerStarted","Data":"c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c"} Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.408979 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.411105 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-7d7fb5b5d9-qtddd" event={"ID":"6351b489-49ff-47a9-bb4f-26632893416c","Type":"ContainerStarted","Data":"bdb45099d84b938068bc051a2f06f8ef6592ed2c3353a61ffe59c82472e2ffa9"} Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.412763 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" event={"ID":"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7","Type":"ContainerStarted","Data":"9366ff0ed727d2fde23868a5c7782bd9a11a717a40aabadb1103941deb18aa6e"} Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.416639 5024 generic.go:334] "Generic (PLEG): container finished" podID="8494f18f-160a-41ae-802d-a490037f0aec" containerID="18b98c970ad1d279ab9432575442e5c4992bde3230386311bda136dc0b571473" exitCode=0 Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.416681 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" event={"ID":"8494f18f-160a-41ae-802d-a490037f0aec","Type":"ContainerDied","Data":"18b98c970ad1d279ab9432575442e5c4992bde3230386311bda136dc0b571473"} Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.439984 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.009039132 podStartE2EDuration="7.439963442s" podCreationTimestamp="2025-11-28 17:23:34 +0000 UTC" firstStartedPulling="2025-11-28 17:23:35.399311589 +0000 UTC m=+1517.448232494" lastFinishedPulling="2025-11-28 17:23:40.830235899 +0000 UTC m=+1522.879156804" observedRunningTime="2025-11-28 17:23:41.432660725 +0000 UTC m=+1523.481581630" watchObservedRunningTime="2025-11-28 17:23:41.439963442 +0000 UTC m=+1523.488884347" Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.506144 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d5797c764-zffzc" podStartSLOduration=3.506118228 podStartE2EDuration="3.506118228s" podCreationTimestamp="2025-11-28 17:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:41.49348431 +0000 UTC m=+1523.542405225" watchObservedRunningTime="2025-11-28 17:23:41.506118228 +0000 UTC m=+1523.555039133" Nov 28 17:23:41 crc kubenswrapper[5024]: I1128 17:23:41.535567 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.535542233 podStartE2EDuration="4.535542233s" podCreationTimestamp="2025-11-28 17:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:41.518443588 +0000 UTC m=+1523.567364513" watchObservedRunningTime="2025-11-28 17:23:41.535542233 +0000 UTC m=+1523.584463138" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.436372 5024 generic.go:334] "Generic (PLEG): container finished" podID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerID="3a8f40b037ce3bd2d5611352de6ae3e7dd3d44add89c632e74057caaa7410e2b" exitCode=0 Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.436947 5024 generic.go:334] "Generic (PLEG): container finished" podID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerID="bf757829288d6b021bc184437632f82b85387f3458f8353c874e74d9ab14a1ea" exitCode=2 Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.436965 5024 generic.go:334] "Generic (PLEG): container finished" podID="a41a632a-a62a-4fa6-8326-d916ab8939e5" 
containerID="8f60047233d3a2b49addc663cc7233ac40ef32f8aac275dc99dc5b688c91832f" exitCode=0 Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.437086 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerDied","Data":"3a8f40b037ce3bd2d5611352de6ae3e7dd3d44add89c632e74057caaa7410e2b"} Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.437140 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerDied","Data":"bf757829288d6b021bc184437632f82b85387f3458f8353c874e74d9ab14a1ea"} Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.437163 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerDied","Data":"8f60047233d3a2b49addc663cc7233ac40ef32f8aac275dc99dc5b688c91832f"} Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.441782 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" event={"ID":"8494f18f-160a-41ae-802d-a490037f0aec","Type":"ContainerStarted","Data":"b5051948f72205fd2db1b4694a7407db7fdbc5f6d2f4c0dbd1899cd0ebad0e8c"} Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.442015 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.461880 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" podStartSLOduration=4.461860225 podStartE2EDuration="4.461860225s" podCreationTimestamp="2025-11-28 17:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:42.458761927 +0000 UTC m=+1524.507682832" watchObservedRunningTime="2025-11-28 17:23:42.461860225 +0000 UTC m=+1524.510781140" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.682285 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wbkl5"] Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.685387 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.701861 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wbkl5"] Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.776669 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-catalog-content\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.776725 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-utilities\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.776839 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmr4\" (UniqueName: \"kubernetes.io/projected/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-kube-api-access-7xmr4\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.879469 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-catalog-content\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.879524 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-utilities\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.880080 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-catalog-content\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.880118 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-utilities\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.880281 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmr4\" (UniqueName: \"kubernetes.io/projected/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-kube-api-access-7xmr4\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:42 crc kubenswrapper[5024]: I1128 17:23:42.912184 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7xmr4\" (UniqueName: \"kubernetes.io/projected/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-kube-api-access-7xmr4\") pod \"redhat-operators-wbkl5\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:43 crc kubenswrapper[5024]: I1128 17:23:43.008698 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.147502 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wbkl5"] Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.475578 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" event={"ID":"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7","Type":"ContainerStarted","Data":"fe919107fb6376a36a8228cf633887e30d4294a4447c75ce52fd69ffed219d9a"} Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.476077 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.494097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerStarted","Data":"4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0"} Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.494156 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerStarted","Data":"f0d5b146bc510869d0750443c71776c9a6f85edbe8418e227daac8003662a4a0"} Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.519545 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d7fb5b5d9-qtddd" event={"ID":"6351b489-49ff-47a9-bb4f-26632893416c","Type":"ContainerStarted","Data":"50d9779890245f813a6757f8ffe624036b5368159fbb86b3ace9f1b231fe854a"} Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.519607 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d7fb5b5d9-qtddd" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.527258 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" podStartSLOduration=3.743299855 podStartE2EDuration="6.527229212s" podCreationTimestamp="2025-11-28 17:23:38 +0000 UTC" firstStartedPulling="2025-11-28 17:23:40.805728064 +0000 UTC m=+1522.854648969" lastFinishedPulling="2025-11-28 17:23:43.589657421 +0000 UTC m=+1525.638578326" observedRunningTime="2025-11-28 17:23:44.51165347 +0000 UTC m=+1526.560574395" watchObservedRunningTime="2025-11-28 17:23:44.527229212 +0000 UTC m=+1526.576150117" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.544149 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7d7fb5b5d9-qtddd" podStartSLOduration=3.50832383 podStartE2EDuration="6.544128781s" podCreationTimestamp="2025-11-28 17:23:38 +0000 UTC" firstStartedPulling="2025-11-28 17:23:40.550115824 +0000 UTC m=+1522.599036729" lastFinishedPulling="2025-11-28 17:23:43.585920775 +0000 UTC m=+1525.634841680" observedRunningTime="2025-11-28 17:23:44.543576205 +0000 UTC m=+1526.592497140" watchObservedRunningTime="2025-11-28 17:23:44.544128781 +0000 UTC m=+1526.593049686" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.585962 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.586032 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.628758 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.641976 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.908230 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.908534 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.952796 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:44 crc kubenswrapper[5024]: I1128 17:23:44.962452 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:45 crc kubenswrapper[5024]: I1128 17:23:45.517796 5024 generic.go:334] "Generic (PLEG): container finished" podID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerID="4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0" exitCode=0 Nov 28 17:23:45 crc kubenswrapper[5024]: I1128 17:23:45.519646 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerDied","Data":"4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0"} Nov 28 17:23:45 crc kubenswrapper[5024]: I1128 17:23:45.519786 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:23:45 crc kubenswrapper[5024]: I1128 17:23:45.520564 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:23:45 crc kubenswrapper[5024]: I1128 17:23:45.520889 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:45 crc kubenswrapper[5024]: I1128 17:23:45.520911 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.523115 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-586d869b9-5wnvb"] Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.525138 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.548079 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8cf7dff68-b7rsd"] Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.549755 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.550733 5024 generic.go:334] "Generic (PLEG): container finished" podID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerID="82f69f7b7e9ee7387ced8e1aa87d6491efffe0f2dec97ef139107b8df63ef8e8" exitCode=0 Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.550820 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.550829 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.551999 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerDied","Data":"82f69f7b7e9ee7387ced8e1aa87d6491efffe0f2dec97ef139107b8df63ef8e8"} Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.552146 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.552158 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.593149 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-586d869b9-5wnvb"] Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.606110 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8cf7dff68-b7rsd"] Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.634565 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7556978694-8gc2j"] Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.636390 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.663229 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7556978694-8gc2j"] Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.720477 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-combined-ca-bundle\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.720592 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data-custom\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.720698 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.720827 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtr5j\" (UniqueName: \"kubernetes.io/projected/38dadf77-6280-4705-ab05-ade696a9d784-kube-api-access-gtr5j\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.720906 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data-custom\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.720974 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.721181 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-combined-ca-bundle\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.721325 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghdg\" (UniqueName: \"kubernetes.io/projected/58bfac75-cfac-4404-b44b-1ca7b1a94442-kube-api-access-cghdg\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823579 
5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data-custom\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823629 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-combined-ca-bundle\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823692 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghdg\" (UniqueName: \"kubernetes.io/projected/58bfac75-cfac-4404-b44b-1ca7b1a94442-kube-api-access-cghdg\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823733 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-combined-ca-bundle\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823764 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data-custom\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823803 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-combined-ca-bundle\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823834 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823888 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5n7\" (UniqueName: \"kubernetes.io/projected/b4d303f1-34b6-4086-b60a-819cc4b8d96a-kube-api-access-nj5n7\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823911 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc 
kubenswrapper[5024]: I1128 17:23:47.823941 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtr5j\" (UniqueName: \"kubernetes.io/projected/38dadf77-6280-4705-ab05-ade696a9d784-kube-api-access-gtr5j\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.823972 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data-custom\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.824056 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.830915 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.830944 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data-custom\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.832874 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-combined-ca-bundle\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.833459 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.834996 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data-custom\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.841928 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-combined-ca-bundle\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.842259 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cghdg\" (UniqueName: \"kubernetes.io/projected/58bfac75-cfac-4404-b44b-1ca7b1a94442-kube-api-access-cghdg\") pod \"heat-engine-586d869b9-5wnvb\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.847202 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtr5j\" (UniqueName: \"kubernetes.io/projected/38dadf77-6280-4705-ab05-ade696a9d784-kube-api-access-gtr5j\") pod \"heat-api-8cf7dff68-b7rsd\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") " pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.854375 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.894081 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.927968 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj5n7\" (UniqueName: \"kubernetes.io/projected/b4d303f1-34b6-4086-b60a-819cc4b8d96a-kube-api-access-nj5n7\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.928034 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.928164 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data-custom\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.929305 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-combined-ca-bundle\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.935944 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-combined-ca-bundle\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.937617 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data-custom\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.939722 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.949953 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj5n7\" (UniqueName: \"kubernetes.io/projected/b4d303f1-34b6-4086-b60a-819cc4b8d96a-kube-api-access-nj5n7\") pod \"heat-cfnapi-7556978694-8gc2j\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") " pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:47 crc kubenswrapper[5024]: I1128 17:23:47.956797 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.595567 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a41a632a-a62a-4fa6-8326-d916ab8939e5","Type":"ContainerDied","Data":"386caeabb788602cfad77065c7b4e990f2911d467bca27128cb528afd9e64749"} Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.595955 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="386caeabb788602cfad77065c7b4e990f2911d467bca27128cb528afd9e64749" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.607516 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.660453 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8cf7dff68-b7rsd"] Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.665643 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-config-data\") pod \"a41a632a-a62a-4fa6-8326-d916ab8939e5\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.665866 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-log-httpd\") pod \"a41a632a-a62a-4fa6-8326-d916ab8939e5\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.665922 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpm2z\" (UniqueName: \"kubernetes.io/projected/a41a632a-a62a-4fa6-8326-d916ab8939e5-kube-api-access-fpm2z\") pod \"a41a632a-a62a-4fa6-8326-d916ab8939e5\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.665945 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-scripts\") pod \"a41a632a-a62a-4fa6-8326-d916ab8939e5\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.666006 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-sg-core-conf-yaml\") pod \"a41a632a-a62a-4fa6-8326-d916ab8939e5\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.666221 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-combined-ca-bundle\") pod \"a41a632a-a62a-4fa6-8326-d916ab8939e5\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.666249 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-run-httpd\") pod \"a41a632a-a62a-4fa6-8326-d916ab8939e5\" (UID: \"a41a632a-a62a-4fa6-8326-d916ab8939e5\") " Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.667199 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a41a632a-a62a-4fa6-8326-d916ab8939e5" (UID: "a41a632a-a62a-4fa6-8326-d916ab8939e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.667389 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a41a632a-a62a-4fa6-8326-d916ab8939e5" (UID: "a41a632a-a62a-4fa6-8326-d916ab8939e5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.678591 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-scripts" (OuterVolumeSpecName: "scripts") pod "a41a632a-a62a-4fa6-8326-d916ab8939e5" (UID: "a41a632a-a62a-4fa6-8326-d916ab8939e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.693816 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41a632a-a62a-4fa6-8326-d916ab8939e5-kube-api-access-fpm2z" (OuterVolumeSpecName: "kube-api-access-fpm2z") pod "a41a632a-a62a-4fa6-8326-d916ab8939e5" (UID: "a41a632a-a62a-4fa6-8326-d916ab8939e5"). InnerVolumeSpecName "kube-api-access-fpm2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.709497 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-586d869b9-5wnvb"] Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.777849 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.777883 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpm2z\" (UniqueName: \"kubernetes.io/projected/a41a632a-a62a-4fa6-8326-d916ab8939e5-kube-api-access-fpm2z\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.777895 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.777903 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a41a632a-a62a-4fa6-8326-d916ab8939e5-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.915178 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a41a632a-a62a-4fa6-8326-d916ab8939e5" (UID: "a41a632a-a62a-4fa6-8326-d916ab8939e5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.918183 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:48 crc kubenswrapper[5024]: I1128 17:23:48.997165 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a41a632a-a62a-4fa6-8326-d916ab8939e5" (UID: "a41a632a-a62a-4fa6-8326-d916ab8939e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.014896 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7556978694-8gc2j"] Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.019911 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.041555 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.041672 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.115172 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-config-data" (OuterVolumeSpecName: "config-data") pod "a41a632a-a62a-4fa6-8326-d916ab8939e5" (UID: "a41a632a-a62a-4fa6-8326-d916ab8939e5"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.123430 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41a632a-a62a-4fa6-8326-d916ab8939e5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.181485 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.204818 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.204950 5024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.238992 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.362197 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.754741 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qgvxf"] Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.755050 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" podUID="5449af6d-cc03-476d-b27c-b2932a79761b" containerName="dnsmasq-dns" containerID="cri-o://c7106fb736d08ea618744a44ef20674750bafed77585fddc9b4ea6b179f532b3" gracePeriod=10 Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.766286 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8cf7dff68-b7rsd" event={"ID":"38dadf77-6280-4705-ab05-ade696a9d784","Type":"ContainerStarted","Data":"c189d7f9ddbb847720f79e20e4213c0bce74e51578f1beb8febe10562b02f8ac"} Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.779172 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7556978694-8gc2j" event={"ID":"b4d303f1-34b6-4086-b60a-819cc4b8d96a","Type":"ContainerStarted","Data":"9e7fde4cdba9980941943cd2bcebb314fbb73b501a1e9ffbf8d55c9c5dba7470"} Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.830542 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerStarted","Data":"dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e"} Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.847177 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:49 crc kubenswrapper[5024]: I1128 17:23:49.849153 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-586d869b9-5wnvb" event={"ID":"58bfac75-cfac-4404-b44b-1ca7b1a94442","Type":"ContainerStarted","Data":"a1fded5ac06ce7021198701da541a1154bdfb58976ff28b0de850030d006a83d"} Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.119340 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.181153 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.224305 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:50 crc kubenswrapper[5024]: E1128 17:23:50.225782 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="proxy-httpd" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.225820 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="proxy-httpd" Nov 28 17:23:50 crc kubenswrapper[5024]: E1128 17:23:50.225837 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="sg-core" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.225844 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="sg-core" Nov 28 17:23:50 crc kubenswrapper[5024]: E1128 17:23:50.225887 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-notification-agent" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.225893 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-notification-agent" Nov 28 17:23:50 crc kubenswrapper[5024]: E1128 17:23:50.225915 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-central-agent" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.225921 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-central-agent" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.226796 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="sg-core" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.226823 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-notification-agent" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.226841 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="ceilometer-central-agent" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.226885 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" containerName="proxy-httpd" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.235098 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.240525 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.244464 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.262180 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.387815 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-config-data\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.387888 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw42b\" (UniqueName: \"kubernetes.io/projected/6f0170ba-8387-4ac8-ab60-2253e69be992-kube-api-access-bw42b\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.387945 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-log-httpd\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.387972 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-scripts\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.388003 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.388209 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-run-httpd\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.388290 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492211 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-run-httpd\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492304 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492454 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-config-data\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492494 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw42b\" (UniqueName: \"kubernetes.io/projected/6f0170ba-8387-4ac8-ab60-2253e69be992-kube-api-access-bw42b\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492546 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-scripts\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492570 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-log-httpd\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492610 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.492776 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-run-httpd\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.496638 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-log-httpd\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.500005 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.500236 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-scripts\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.501602 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-config-data\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.514155 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.554293 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw42b\" (UniqueName: \"kubernetes.io/projected/6f0170ba-8387-4ac8-ab60-2253e69be992-kube-api-access-bw42b\") pod \"ceilometer-0\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.559497 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41a632a-a62a-4fa6-8326-d916ab8939e5" path="/var/lib/kubelet/pods/a41a632a-a62a-4fa6-8326-d916ab8939e5/volumes" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.623305 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.868257 5024 generic.go:334] "Generic (PLEG): container finished" podID="5449af6d-cc03-476d-b27c-b2932a79761b" containerID="c7106fb736d08ea618744a44ef20674750bafed77585fddc9b4ea6b179f532b3" exitCode=0 Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.868334 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" event={"ID":"5449af6d-cc03-476d-b27c-b2932a79761b","Type":"ContainerDied","Data":"c7106fb736d08ea618744a44ef20674750bafed77585fddc9b4ea6b179f532b3"} Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.878195 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-586d869b9-5wnvb" event={"ID":"58bfac75-cfac-4404-b44b-1ca7b1a94442","Type":"ContainerStarted","Data":"15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76"} Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.879546 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.916704 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8cf7dff68-b7rsd" event={"ID":"38dadf77-6280-4705-ab05-ade696a9d784","Type":"ContainerStarted","Data":"f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a"} Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.922211 5024 scope.go:117] "RemoveContainer" containerID="f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.931172 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7556978694-8gc2j" event={"ID":"b4d303f1-34b6-4086-b60a-819cc4b8d96a","Type":"ContainerStarted","Data":"3c18584e6222a37b5feb50aa9a8d161079eafb6b9723e3b3073ec272bd6522a5"} Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.931211 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:50 crc kubenswrapper[5024]: I1128 17:23:50.933866 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-engine-586d869b9-5wnvb" podStartSLOduration=3.933841185 podStartE2EDuration="3.933841185s" podCreationTimestamp="2025-11-28 17:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:50.90545442 +0000 UTC m=+1532.954375345" watchObservedRunningTime="2025-11-28 17:23:50.933841185 +0000 UTC m=+1532.982762090" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.023601 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7556978694-8gc2j" podStartSLOduration=4.02357402 podStartE2EDuration="4.02357402s" podCreationTimestamp="2025-11-28 17:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:50.985785348 +0000 UTC m=+1533.034706253" watchObservedRunningTime="2025-11-28 17:23:51.02357402 +0000 UTC m=+1533.072494935" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.126992 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.236043 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-svc\") pod \"5449af6d-cc03-476d-b27c-b2932a79761b\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.236177 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jfms\" (UniqueName: \"kubernetes.io/projected/5449af6d-cc03-476d-b27c-b2932a79761b-kube-api-access-5jfms\") pod \"5449af6d-cc03-476d-b27c-b2932a79761b\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.236244 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-swift-storage-0\") pod \"5449af6d-cc03-476d-b27c-b2932a79761b\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.236380 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-config\") pod \"5449af6d-cc03-476d-b27c-b2932a79761b\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.236426 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-sb\") pod \"5449af6d-cc03-476d-b27c-b2932a79761b\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.236488 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-nb\") pod \"5449af6d-cc03-476d-b27c-b2932a79761b\" (UID: \"5449af6d-cc03-476d-b27c-b2932a79761b\") " Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.246371 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5449af6d-cc03-476d-b27c-b2932a79761b-kube-api-access-5jfms" (OuterVolumeSpecName: 
"kube-api-access-5jfms") pod "5449af6d-cc03-476d-b27c-b2932a79761b" (UID: "5449af6d-cc03-476d-b27c-b2932a79761b"). InnerVolumeSpecName "kube-api-access-5jfms". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.346513 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jfms\" (UniqueName: \"kubernetes.io/projected/5449af6d-cc03-476d-b27c-b2932a79761b-kube-api-access-5jfms\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.410752 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5449af6d-cc03-476d-b27c-b2932a79761b" (UID: "5449af6d-cc03-476d-b27c-b2932a79761b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.415291 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5449af6d-cc03-476d-b27c-b2932a79761b" (UID: "5449af6d-cc03-476d-b27c-b2932a79761b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.426622 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-config" (OuterVolumeSpecName: "config") pod "5449af6d-cc03-476d-b27c-b2932a79761b" (UID: "5449af6d-cc03-476d-b27c-b2932a79761b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.438185 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.443163 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5449af6d-cc03-476d-b27c-b2932a79761b" (UID: "5449af6d-cc03-476d-b27c-b2932a79761b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.457777 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.458010 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.458090 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.458158 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.472097 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5449af6d-cc03-476d-b27c-b2932a79761b" (UID: "5449af6d-cc03-476d-b27c-b2932a79761b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.562509 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5449af6d-cc03-476d-b27c-b2932a79761b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.829270 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="f9ace56d-5740-45f2-b8ac-04c2ed9b4270" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.210:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.942903 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerStarted","Data":"1c4570dbb6a1218dd00791cc49a647918bd238723251282f9fb87c6d17a92b71"} Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.945343 5024 generic.go:334] "Generic (PLEG): container finished" podID="38dadf77-6280-4705-ab05-ade696a9d784" containerID="f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a" exitCode=1 Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.945366 5024 generic.go:334] "Generic (PLEG): container finished" podID="38dadf77-6280-4705-ab05-ade696a9d784" containerID="4f4f414cfefa969932c58feb0d6b0bf943bdc441a8aaf8b99171c22714230884" exitCode=1 Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.945482 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8cf7dff68-b7rsd" event={"ID":"38dadf77-6280-4705-ab05-ade696a9d784","Type":"ContainerDied","Data":"f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a"} Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.945587 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8cf7dff68-b7rsd" 
event={"ID":"38dadf77-6280-4705-ab05-ade696a9d784","Type":"ContainerDied","Data":"4f4f414cfefa969932c58feb0d6b0bf943bdc441a8aaf8b99171c22714230884"} Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.945610 5024 scope.go:117] "RemoveContainer" containerID="f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.946402 5024 scope.go:117] "RemoveContainer" containerID="4f4f414cfefa969932c58feb0d6b0bf943bdc441a8aaf8b99171c22714230884" Nov 28 17:23:51 crc kubenswrapper[5024]: E1128 17:23:51.946791 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8cf7dff68-b7rsd_openstack(38dadf77-6280-4705-ab05-ade696a9d784)\"" pod="openstack/heat-api-8cf7dff68-b7rsd" podUID="38dadf77-6280-4705-ab05-ade696a9d784" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.948427 5024 generic.go:334] "Generic (PLEG): container finished" podID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerID="3c18584e6222a37b5feb50aa9a8d161079eafb6b9723e3b3073ec272bd6522a5" exitCode=1 Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.948638 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7556978694-8gc2j" event={"ID":"b4d303f1-34b6-4086-b60a-819cc4b8d96a","Type":"ContainerDied","Data":"3c18584e6222a37b5feb50aa9a8d161079eafb6b9723e3b3073ec272bd6522a5"} Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.952967 5024 generic.go:334] "Generic (PLEG): container finished" podID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerID="dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e" exitCode=0 Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.953067 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerDied","Data":"dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e"} Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.954248 5024 scope.go:117] "RemoveContainer" containerID="3c18584e6222a37b5feb50aa9a8d161079eafb6b9723e3b3073ec272bd6522a5" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.962834 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" Nov 28 17:23:51 crc kubenswrapper[5024]: I1128 17:23:51.963678 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-qgvxf" event={"ID":"5449af6d-cc03-476d-b27c-b2932a79761b","Type":"ContainerDied","Data":"0731a69ef0828fca883faa5ed55549c5b9d7ec8ac701ddcabfe8ff2ab53ca4fc"} Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.071991 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qgvxf"] Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.083158 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-qgvxf"] Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.090374 5024 scope.go:117] "RemoveContainer" containerID="f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a" Nov 28 17:23:52 crc kubenswrapper[5024]: E1128 17:23:52.092790 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a\": container with ID starting with f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a not found: ID does not exist" containerID="f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.092842 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a"} err="failed to get container status \"f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a\": rpc error: code = NotFound desc = could not find container \"f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a\": container with ID starting with f5630e4a9450ce5517d2947bdcc9636f734ba293e90311e1770fb3392242253a not found: ID does not exist" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.092864 5024 scope.go:117] "RemoveContainer" containerID="c7106fb736d08ea618744a44ef20674750bafed77585fddc9b4ea6b179f532b3" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.162896 5024 scope.go:117] "RemoveContainer" containerID="e5968e78e28594d50ec8ba60a7d8c481840b7f21e8f3044de5fb2d53275b3e30" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.595803 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5449af6d-cc03-476d-b27c-b2932a79761b" path="/var/lib/kubelet/pods/5449af6d-cc03-476d-b27c-b2932a79761b/volumes" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.825281 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="f9ace56d-5740-45f2-b8ac-04c2ed9b4270" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.210:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.895085 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.895190 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-8cf7dff68-b7rsd" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.958141 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7556978694-8gc2j" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.975726 5024 generic.go:334] 
"Generic (PLEG): container finished" podID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerID="6be23a480d602103e65971e93d980e604220404a27adfaf1624c42efed869f85" exitCode=1 Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.975799 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7556978694-8gc2j" event={"ID":"b4d303f1-34b6-4086-b60a-819cc4b8d96a","Type":"ContainerDied","Data":"6be23a480d602103e65971e93d980e604220404a27adfaf1624c42efed869f85"} Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.975884 5024 scope.go:117] "RemoveContainer" containerID="3c18584e6222a37b5feb50aa9a8d161079eafb6b9723e3b3073ec272bd6522a5" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.976600 5024 scope.go:117] "RemoveContainer" containerID="6be23a480d602103e65971e93d980e604220404a27adfaf1624c42efed869f85" Nov 28 17:23:52 crc kubenswrapper[5024]: E1128 17:23:52.977123 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7556978694-8gc2j_openstack(b4d303f1-34b6-4086-b60a-819cc4b8d96a)\"" pod="openstack/heat-cfnapi-7556978694-8gc2j" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.979809 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerStarted","Data":"f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe"} Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.987190 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerStarted","Data":"e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d"} Nov 28 17:23:52 crc kubenswrapper[5024]: I1128 17:23:52.989920 5024 scope.go:117] "RemoveContainer" containerID="4f4f414cfefa969932c58feb0d6b0bf943bdc441a8aaf8b99171c22714230884" Nov 28 17:23:52 crc kubenswrapper[5024]: E1128 17:23:52.990323 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8cf7dff68-b7rsd_openstack(38dadf77-6280-4705-ab05-ade696a9d784)\"" pod="openstack/heat-api-8cf7dff68-b7rsd" podUID="38dadf77-6280-4705-ab05-ade696a9d784" Nov 28 17:23:53 crc kubenswrapper[5024]: I1128 17:23:53.008869 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:53 crc kubenswrapper[5024]: I1128 17:23:53.008920 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:23:53 crc kubenswrapper[5024]: I1128 17:23:53.062579 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wbkl5" podStartSLOduration=4.212981623 podStartE2EDuration="11.062558498s" podCreationTimestamp="2025-11-28 17:23:42 +0000 UTC" firstStartedPulling="2025-11-28 17:23:45.527630465 +0000 UTC m=+1527.576551370" lastFinishedPulling="2025-11-28 17:23:52.37720734 +0000 UTC m=+1534.426128245" observedRunningTime="2025-11-28 17:23:53.030976453 +0000 UTC m=+1535.079897358" watchObservedRunningTime="2025-11-28 17:23:53.062558498 +0000 UTC m=+1535.111479403" Nov 28 17:23:53 crc kubenswrapper[5024]: I1128 17:23:53.687242 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7d7fb5b5d9-qtddd"
Nov 28 17:23:53 crc kubenswrapper[5024]: I1128 17:23:53.864697 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5d56577dc4-kw6js"
Nov 28 17:23:53 crc kubenswrapper[5024]: I1128 17:23:53.982071 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8cf7dff68-b7rsd"]
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.045193 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6cb795f57c-7826b"]
Nov 28 17:23:54 crc kubenswrapper[5024]: E1128 17:23:54.046049 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5449af6d-cc03-476d-b27c-b2932a79761b" containerName="init"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.046061 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5449af6d-cc03-476d-b27c-b2932a79761b" containerName="init"
Nov 28 17:23:54 crc kubenswrapper[5024]: E1128 17:23:54.046083 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5449af6d-cc03-476d-b27c-b2932a79761b" containerName="dnsmasq-dns"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.046089 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5449af6d-cc03-476d-b27c-b2932a79761b" containerName="dnsmasq-dns"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.046327 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5449af6d-cc03-476d-b27c-b2932a79761b" containerName="dnsmasq-dns"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.047168 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.058597 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.058977 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.076959 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7556978694-8gc2j"]
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.112510 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wbkl5" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="registry-server" probeResult="failure" output=<
Nov 28 17:23:54 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s
Nov 28 17:23:54 crc kubenswrapper[5024]: >
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.114095 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cb795f57c-7826b"]
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.115119 5024 scope.go:117] "RemoveContainer" containerID="6be23a480d602103e65971e93d980e604220404a27adfaf1624c42efed869f85"
Nov 28 17:23:54 crc kubenswrapper[5024]: E1128 17:23:54.115494 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7556978694-8gc2j_openstack(b4d303f1-34b6-4086-b60a-819cc4b8d96a)\"" pod="openstack/heat-cfnapi-7556978694-8gc2j" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.138060 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6cdbcf9767-2dvsc"]
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.139707 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.144416 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.144664 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.167322 5024 scope.go:117] "RemoveContainer" containerID="4f4f414cfefa969932c58feb0d6b0bf943bdc441a8aaf8b99171c22714230884"
Nov 28 17:23:54 crc kubenswrapper[5024]: E1128 17:23:54.167588 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8cf7dff68-b7rsd_openstack(38dadf77-6280-4705-ab05-ade696a9d784)\"" pod="openstack/heat-api-8cf7dff68-b7rsd" podUID="38dadf77-6280-4705-ab05-ade696a9d784"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.167895 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerStarted","Data":"8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a"}
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.171930 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6cdbcf9767-2dvsc"]
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.177612 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.177663 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-public-tls-certs\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.177796 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data-custom\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.177836 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-combined-ca-bundle\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.177860 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffkv\" (UniqueName: \"kubernetes.io/projected/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-kube-api-access-gffkv\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.177997 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-internal-tls-certs\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283422 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-public-tls-certs\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283529 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283569 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-public-tls-certs\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283675 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2s8\" (UniqueName: \"kubernetes.io/projected/d4319898-7040-4c0c-b5eb-d2eabe093afb-kube-api-access-2c2s8\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283832 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283868 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data-custom\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283919 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-combined-ca-bundle\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.283951 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gffkv\" (UniqueName: \"kubernetes.io/projected/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-kube-api-access-gffkv\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.284047 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-internal-tls-certs\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.284229 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-internal-tls-certs\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.284278 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-combined-ca-bundle\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.284395 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data-custom\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.417879 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.426689 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.426793 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-internal-tls-certs\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.426874 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-combined-ca-bundle\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.426932 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data-custom\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.427036 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-public-tls-certs\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.427116 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2s8\" (UniqueName: \"kubernetes.io/projected/d4319898-7040-4c0c-b5eb-d2eabe093afb-kube-api-access-2c2s8\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.428967 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data-custom\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.434922 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data-custom\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.435899 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-internal-tls-certs\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.456866 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffkv\" (UniqueName: \"kubernetes.io/projected/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-kube-api-access-gffkv\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.457165 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-internal-tls-certs\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.473195 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-public-tls-certs\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.486889 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2s8\" (UniqueName: \"kubernetes.io/projected/d4319898-7040-4c0c-b5eb-d2eabe093afb-kube-api-access-2c2s8\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.491359 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-combined-ca-bundle\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.491814 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-public-tls-certs\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.493124 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-combined-ca-bundle\") pod \"heat-api-6cb795f57c-7826b\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.494257 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data\") pod \"heat-cfnapi-6cdbcf9767-2dvsc\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.593672 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:54 crc kubenswrapper[5024]: I1128 17:23:54.734484 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:55 crc kubenswrapper[5024]: I1128 17:23:55.063651 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6cb795f57c-7826b"]
Nov 28 17:23:55 crc kubenswrapper[5024]: I1128 17:23:55.209896 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerStarted","Data":"794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea"}
Nov 28 17:23:55 crc kubenswrapper[5024]: I1128 17:23:55.218471 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cb795f57c-7826b" event={"ID":"eda77528-b4c2-4529-9293-f5bf3c7aeb5a","Type":"ContainerStarted","Data":"26f9705a1ee0060f49f467e7edb9f5c17ef28420b2c1cbf0f20560c590f25f74"}
Nov 28 17:23:55 crc kubenswrapper[5024]: I1128 17:23:55.675422 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6cdbcf9767-2dvsc"]
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.246329 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cb795f57c-7826b" event={"ID":"eda77528-b4c2-4529-9293-f5bf3c7aeb5a","Type":"ContainerStarted","Data":"7abc8dbbc9f6e002ea8839c28dbfa5350914a54bd42034d95b2eb3a409f662dd"}
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.247175 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.261104 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" event={"ID":"d4319898-7040-4c0c-b5eb-d2eabe093afb","Type":"ContainerStarted","Data":"4a751efb5ce220d58d1727b0c975505fa27ee6bebb9e7ea15cb59c81b28af867"}
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.335178 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6cb795f57c-7826b" podStartSLOduration=3.335151474 podStartE2EDuration="3.335151474s" podCreationTimestamp="2025-11-28 17:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:56.274530845 +0000 UTC m=+1538.323451770" watchObservedRunningTime="2025-11-28 17:23:56.335151474 +0000 UTC m=+1538.384072379"
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.413264 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8cf7dff68-b7rsd"
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.597585 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data\") pod \"38dadf77-6280-4705-ab05-ade696a9d784\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.597952 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-combined-ca-bundle\") pod \"38dadf77-6280-4705-ab05-ade696a9d784\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.597977 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data-custom\") pod \"38dadf77-6280-4705-ab05-ade696a9d784\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.598041 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtr5j\" (UniqueName: \"kubernetes.io/projected/38dadf77-6280-4705-ab05-ade696a9d784-kube-api-access-gtr5j\") pod \"38dadf77-6280-4705-ab05-ade696a9d784\" (UID: \"38dadf77-6280-4705-ab05-ade696a9d784\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.612558 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38dadf77-6280-4705-ab05-ade696a9d784-kube-api-access-gtr5j" (OuterVolumeSpecName: "kube-api-access-gtr5j") pod "38dadf77-6280-4705-ab05-ade696a9d784" (UID: "38dadf77-6280-4705-ab05-ade696a9d784"). InnerVolumeSpecName "kube-api-access-gtr5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.615786 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38dadf77-6280-4705-ab05-ade696a9d784" (UID: "38dadf77-6280-4705-ab05-ade696a9d784"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.674192 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38dadf77-6280-4705-ab05-ade696a9d784" (UID: "38dadf77-6280-4705-ab05-ade696a9d784"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.702906 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.702928 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.702937 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtr5j\" (UniqueName: \"kubernetes.io/projected/38dadf77-6280-4705-ab05-ade696a9d784-kube-api-access-gtr5j\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.807931 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7556978694-8gc2j"
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.850129 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="f9ace56d-5740-45f2-b8ac-04c2ed9b4270" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.210:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.909210 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-combined-ca-bundle\") pod \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.909326 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data\") pod \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.909394 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj5n7\" (UniqueName: \"kubernetes.io/projected/b4d303f1-34b6-4086-b60a-819cc4b8d96a-kube-api-access-nj5n7\") pod \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.909595 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data-custom\") pod \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\" (UID: \"b4d303f1-34b6-4086-b60a-819cc4b8d96a\") "
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.944447 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d303f1-34b6-4086-b60a-819cc4b8d96a-kube-api-access-nj5n7" (OuterVolumeSpecName: "kube-api-access-nj5n7") pod "b4d303f1-34b6-4086-b60a-819cc4b8d96a" (UID: "b4d303f1-34b6-4086-b60a-819cc4b8d96a"). InnerVolumeSpecName "kube-api-access-nj5n7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:23:56 crc kubenswrapper[5024]: I1128 17:23:56.973647 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b4d303f1-34b6-4086-b60a-819cc4b8d96a" (UID: "b4d303f1-34b6-4086-b60a-819cc4b8d96a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.001278 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data" (OuterVolumeSpecName: "config-data") pod "38dadf77-6280-4705-ab05-ade696a9d784" (UID: "38dadf77-6280-4705-ab05-ade696a9d784"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.023996 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj5n7\" (UniqueName: \"kubernetes.io/projected/b4d303f1-34b6-4086-b60a-819cc4b8d96a-kube-api-access-nj5n7\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.024041 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38dadf77-6280-4705-ab05-ade696a9d784-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.024052 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.040179 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d303f1-34b6-4086-b60a-819cc4b8d96a" (UID: "b4d303f1-34b6-4086-b60a-819cc4b8d96a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.098662 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data" (OuterVolumeSpecName: "config-data") pod "b4d303f1-34b6-4086-b60a-819cc4b8d96a" (UID: "b4d303f1-34b6-4086-b60a-819cc4b8d96a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.125980 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.126045 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d303f1-34b6-4086-b60a-819cc4b8d96a-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.273089 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerStarted","Data":"dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415"}
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.273254 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.276055 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8cf7dff68-b7rsd"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.276052 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8cf7dff68-b7rsd" event={"ID":"38dadf77-6280-4705-ab05-ade696a9d784","Type":"ContainerDied","Data":"c189d7f9ddbb847720f79e20e4213c0bce74e51578f1beb8febe10562b02f8ac"}
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.276484 5024 scope.go:117] "RemoveContainer" containerID="4f4f414cfefa969932c58feb0d6b0bf943bdc441a8aaf8b99171c22714230884"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.278556 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7556978694-8gc2j" event={"ID":"b4d303f1-34b6-4086-b60a-819cc4b8d96a","Type":"ContainerDied","Data":"9e7fde4cdba9980941943cd2bcebb314fbb73b501a1e9ffbf8d55c9c5dba7470"}
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.278557 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7556978694-8gc2j"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.285360 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" event={"ID":"d4319898-7040-4c0c-b5eb-d2eabe093afb","Type":"ContainerStarted","Data":"38d5544cc8a3c600f7dcfd6be11af667c4faf9d1b79a85233c540966e7b0819a"}
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.285400 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.322418 5024 scope.go:117] "RemoveContainer" containerID="6be23a480d602103e65971e93d980e604220404a27adfaf1624c42efed869f85"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.322819 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.519373443 podStartE2EDuration="7.322795176s" podCreationTimestamp="2025-11-28 17:23:50 +0000 UTC" firstStartedPulling="2025-11-28 17:23:51.416506313 +0000 UTC m=+1533.465427218" lastFinishedPulling="2025-11-28 17:23:56.219928046 +0000 UTC m=+1538.268848951" observedRunningTime="2025-11-28 17:23:57.310526578 +0000 UTC m=+1539.359447483" watchObservedRunningTime="2025-11-28 17:23:57.322795176 +0000 UTC m=+1539.371716081"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.414011 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" podStartSLOduration=4.413987742 podStartE2EDuration="4.413987742s" podCreationTimestamp="2025-11-28 17:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:57.33811759 +0000 UTC m=+1539.387038495" watchObservedRunningTime="2025-11-28 17:23:57.413987742 +0000 UTC m=+1539.462908647"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.487651 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7556978694-8gc2j"]
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.500779 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7556978694-8gc2j"]
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.511256 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8cf7dff68-b7rsd"]
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.522072 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8cf7dff68-b7rsd"]
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.832335 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="f9ace56d-5740-45f2-b8ac-04c2ed9b4270" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.210:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 17:23:57 crc kubenswrapper[5024]: I1128 17:23:57.855288 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 28 17:23:58 crc kubenswrapper[5024]: I1128 17:23:58.512448 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38dadf77-6280-4705-ab05-ade696a9d784" path="/var/lib/kubelet/pods/38dadf77-6280-4705-ab05-ade696a9d784/volumes"
Nov 28 17:23:58 crc kubenswrapper[5024]: I1128 17:23:58.513233 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" path="/var/lib/kubelet/pods/b4d303f1-34b6-4086-b60a-819cc4b8d96a/volumes"
Nov 28 17:23:59 crc kubenswrapper[5024]: I1128 17:23:59.406429 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d5797c764-zffzc"
Nov 28 17:24:01 crc kubenswrapper[5024]: I1128 17:24:01.715996 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 17:24:01 crc kubenswrapper[5024]: I1128 17:24:01.716344 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-central-agent" containerID="cri-o://e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d" gracePeriod=30
Nov 28 17:24:01 crc kubenswrapper[5024]: I1128 17:24:01.716384 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="proxy-httpd" containerID="cri-o://dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415" gracePeriod=30
Nov 28 17:24:01 crc kubenswrapper[5024]: I1128 17:24:01.716402 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-notification-agent" containerID="cri-o://8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a" gracePeriod=30
Nov 28 17:24:01 crc kubenswrapper[5024]: I1128 17:24:01.716406 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="sg-core" containerID="cri-o://794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea" gracePeriod=30
Nov 28 17:24:02 crc kubenswrapper[5024]: I1128 17:24:02.419825 5024 generic.go:334] "Generic (PLEG): container finished" podID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerID="dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415" exitCode=0
Nov 28 17:24:02 crc kubenswrapper[5024]: I1128 17:24:02.420086 5024 generic.go:334] "Generic (PLEG): container finished" podID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerID="794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea" exitCode=2
Nov 28 17:24:02 crc kubenswrapper[5024]: I1128 17:24:02.420095 5024 generic.go:334] "Generic (PLEG): container finished" podID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerID="8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a" exitCode=0
Nov 28 17:24:02 crc kubenswrapper[5024]: I1128 17:24:02.419904 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerDied","Data":"dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415"}
Nov 28 17:24:02 crc kubenswrapper[5024]: I1128 17:24:02.420146 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerDied","Data":"794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea"}
Nov 28 17:24:02 crc kubenswrapper[5024]: I1128 17:24:02.420157 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerDied","Data":"8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a"}
Nov 28 17:24:04 crc kubenswrapper[5024]: I1128 17:24:04.067592 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wbkl5" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="registry-server" probeResult="failure" output=<
Nov 28 17:24:04 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s
Nov 28 17:24:04 crc kubenswrapper[5024]: >
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.926664 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pb4xj"]
Nov 28 17:24:05 crc kubenswrapper[5024]: E1128 17:24:05.927886 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dadf77-6280-4705-ab05-ade696a9d784" containerName="heat-api"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.927906 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dadf77-6280-4705-ab05-ade696a9d784" containerName="heat-api"
Nov 28 17:24:05 crc kubenswrapper[5024]: E1128 17:24:05.927930 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerName="heat-cfnapi"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.927936 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerName="heat-cfnapi"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.928319 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="38dadf77-6280-4705-ab05-ade696a9d784" containerName="heat-api"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.928347 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="38dadf77-6280-4705-ab05-ade696a9d784" containerName="heat-api"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.928360 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerName="heat-cfnapi"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.928377 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerName="heat-cfnapi"
Nov 28 17:24:05 crc kubenswrapper[5024]: E1128 17:24:05.928579 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerName="heat-cfnapi"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.928587 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d303f1-34b6-4086-b60a-819cc4b8d96a" containerName="heat-cfnapi"
Nov 28 17:24:05 crc kubenswrapper[5024]: E1128 17:24:05.928619 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dadf77-6280-4705-ab05-ade696a9d784" containerName="heat-api"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.928626 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dadf77-6280-4705-ab05-ade696a9d784" containerName="heat-api"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.930167 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:05 crc kubenswrapper[5024]: I1128 17:24:05.939948 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pb4xj"]
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.053049 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvf92\" (UniqueName: \"kubernetes.io/projected/9ed317ff-91e9-4a99-beae-89c81fe8b551-kube-api-access-xvf92\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.053163 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-catalog-content\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.053491 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-utilities\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.155811 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-utilities\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.156087 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvf92\" (UniqueName: \"kubernetes.io/projected/9ed317ff-91e9-4a99-beae-89c81fe8b551-kube-api-access-xvf92\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.156164 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-catalog-content\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.156384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-utilities\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.156595 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-catalog-content\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.190995 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvf92\" (UniqueName: \"kubernetes.io/projected/9ed317ff-91e9-4a99-beae-89c81fe8b551-kube-api-access-xvf92\") pod \"community-operators-pb4xj\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:06 crc kubenswrapper[5024]: I1128 17:24:06.249094 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pb4xj"
Nov 28 17:24:07 crc kubenswrapper[5024]: W1128 17:24:07.081515 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed317ff_91e9_4a99_beae_89c81fe8b551.slice/crio-ec1da5b582eca04ca4b6e417f0c324ebef7510776be0dc48ba999dfcf0702485 WatchSource:0}: Error finding container ec1da5b582eca04ca4b6e417f0c324ebef7510776be0dc48ba999dfcf0702485: Status 404 returned error can't find the container with id ec1da5b582eca04ca4b6e417f0c324ebef7510776be0dc48ba999dfcf0702485
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.087008 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pb4xj"]
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.141791 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc"
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.245645 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5d56577dc4-kw6js"]
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.246102 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerName="heat-cfnapi" containerID="cri-o://fe919107fb6376a36a8228cf633887e30d4294a4447c75ce52fd69ffed219d9a" gracePeriod=60
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.253176 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": EOF"
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.254569 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": EOF"
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.670873 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.672446 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.724451 5024 generic.go:334] "Generic (PLEG): container finished" podID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerID="8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680" exitCode=0
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.724809 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pb4xj" event={"ID":"9ed317ff-91e9-4a99-beae-89c81fe8b551","Type":"ContainerDied","Data":"8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680"}
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.724951 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pb4xj" event={"ID":"9ed317ff-91e9-4a99-beae-89c81fe8b551","Type":"ContainerStarted","Data":"ec1da5b582eca04ca4b6e417f0c324ebef7510776be0dc48ba999dfcf0702485"}
Nov 28 17:24:07 crc kubenswrapper[5024]: I1128 17:24:07.963589 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-586d869b9-5wnvb"
Nov 28 17:24:08 crc kubenswrapper[5024]: I1128 17:24:08.032590 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d5797c764-zffzc"]
Nov 28 17:24:08 crc kubenswrapper[5024]: I1128 17:24:08.032851 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-d5797c764-zffzc" podUID="aca9dafd-8069-42d9-b644-12fc96509330" containerName="heat-engine" containerID="cri-o://c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" gracePeriod=60
Nov 28 17:24:08 crc kubenswrapper[5024]: I1128 17:24:08.716439 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6cb795f57c-7826b"
Nov 28 17:24:08 crc kubenswrapper[5024]: I1128 17:24:08.879859 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d7fb5b5d9-qtddd"]
Nov 28 17:24:08 crc kubenswrapper[5024]: I1128 17:24:08.880360 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7d7fb5b5d9-qtddd" podUID="6351b489-49ff-47a9-bb4f-26632893416c" containerName="heat-api" containerID="cri-o://50d9779890245f813a6757f8ffe624036b5368159fbb86b3ace9f1b231fe854a" gracePeriod=60
Nov 28 17:24:09 crc kubenswrapper[5024]: E1128 17:24:09.408570 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 28 17:24:09 crc kubenswrapper[5024]: E1128 17:24:09.421917 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 28 17:24:09 crc kubenswrapper[5024]: E1128 17:24:09.423625 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 28 17:24:09 crc kubenswrapper[5024]: E1128 17:24:09.423669 5024 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-d5797c764-zffzc" podUID="aca9dafd-8069-42d9-b644-12fc96509330" containerName="heat-engine"
Nov 28 17:24:09 crc kubenswrapper[5024]: I1128 17:24:09.753785 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pb4xj" event={"ID":"9ed317ff-91e9-4a99-beae-89c81fe8b551","Type":"ContainerStarted","Data":"7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d"}
Nov 28 17:24:10 crc kubenswrapper[5024]: I1128 17:24:10.767743 5024 generic.go:334] "Generic (PLEG): container finished" podID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerID="7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d" exitCode=0
Nov 28 17:24:10 crc kubenswrapper[5024]: I1128 17:24:10.767794 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pb4xj" event={"ID":"9ed317ff-91e9-4a99-beae-89c81fe8b551","Type":"ContainerDied","Data":"7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d"}
Nov 28 17:24:12 crc kubenswrapper[5024]: I1128 17:24:12.148520 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7d7fb5b5d9-qtddd" podUID="6351b489-49ff-47a9-bb4f-26632893416c" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.214:8004/healthcheck\": read tcp 10.217.0.2:34730->10.217.0.214:8004: read: connection reset by peer"
Nov 28 17:24:12 crc kubenswrapper[5024]: I1128 17:24:12.827399 5024 generic.go:334] "Generic (PLEG): container finished" podID="6351b489-49ff-47a9-bb4f-26632893416c" containerID="50d9779890245f813a6757f8ffe624036b5368159fbb86b3ace9f1b231fe854a" exitCode=0
Nov 28 17:24:12 crc kubenswrapper[5024]: I1128 17:24:12.827708 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d7fb5b5d9-qtddd" event={"ID":"6351b489-49ff-47a9-bb4f-26632893416c","Type":"ContainerDied","Data":"50d9779890245f813a6757f8ffe624036b5368159fbb86b3ace9f1b231fe854a"}
Nov 28 17:24:12 crc kubenswrapper[5024]: I1128 17:24:12.833464 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pb4xj" event={"ID":"9ed317ff-91e9-4a99-beae-89c81fe8b551","Type":"ContainerStarted","Data":"ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e"}
Nov 28 17:24:12 crc kubenswrapper[5024]: I1128 17:24:12.866243 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pb4xj" podStartSLOduration=3.807415082 podStartE2EDuration="7.866215186s" podCreationTimestamp="2025-11-28 17:24:05 +0000 UTC" firstStartedPulling="2025-11-28 17:24:07.774759691 +0000 UTC m=+1549.823680596" lastFinishedPulling="2025-11-28 17:24:11.833559785 +0000 UTC m=+1553.882480700" observedRunningTime="2025-11-28 17:24:12.858920306 +0000 UTC m=+1554.907841221" watchObservedRunningTime="2025-11-28 17:24:12.866215186 +0000 UTC m=+1554.915136091"
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.288432 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d7fb5b5d9-qtddd"
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.385528 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.213:8000/healthcheck\": read tcp 10.217.0.2:48896->10.217.0.213:8000: read: connection reset by peer"
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.398054 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data\") pod \"6351b489-49ff-47a9-bb4f-26632893416c\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") "
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.398158 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8nbw\" (UniqueName: \"kubernetes.io/projected/6351b489-49ff-47a9-bb4f-26632893416c-kube-api-access-m8nbw\") pod \"6351b489-49ff-47a9-bb4f-26632893416c\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") "
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.398257 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data-custom\") pod \"6351b489-49ff-47a9-bb4f-26632893416c\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") "
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.398461 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-combined-ca-bundle\") pod \"6351b489-49ff-47a9-bb4f-26632893416c\" (UID: \"6351b489-49ff-47a9-bb4f-26632893416c\") "
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.417632 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6351b489-49ff-47a9-bb4f-26632893416c-kube-api-access-m8nbw" (OuterVolumeSpecName: "kube-api-access-m8nbw") pod "6351b489-49ff-47a9-bb4f-26632893416c" (UID: "6351b489-49ff-47a9-bb4f-26632893416c"). InnerVolumeSpecName "kube-api-access-m8nbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.433190 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6351b489-49ff-47a9-bb4f-26632893416c" (UID: "6351b489-49ff-47a9-bb4f-26632893416c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.444209 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6351b489-49ff-47a9-bb4f-26632893416c" (UID: "6351b489-49ff-47a9-bb4f-26632893416c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.502120 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8nbw\" (UniqueName: \"kubernetes.io/projected/6351b489-49ff-47a9-bb4f-26632893416c-kube-api-access-m8nbw\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.502476 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.502515 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.502978 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data" (OuterVolumeSpecName: "config-data") pod "6351b489-49ff-47a9-bb4f-26632893416c" (UID: "6351b489-49ff-47a9-bb4f-26632893416c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.607278 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6351b489-49ff-47a9-bb4f-26632893416c-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.892732 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d7fb5b5d9-qtddd" event={"ID":"6351b489-49ff-47a9-bb4f-26632893416c","Type":"ContainerDied","Data":"bdb45099d84b938068bc051a2f06f8ef6592ed2c3353a61ffe59c82472e2ffa9"}
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.892806 5024 scope.go:117] "RemoveContainer" containerID="50d9779890245f813a6757f8ffe624036b5368159fbb86b3ace9f1b231fe854a"
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.893041 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d7fb5b5d9-qtddd"
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.899700 5024 generic.go:334] "Generic (PLEG): container finished" podID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerID="fe919107fb6376a36a8228cf633887e30d4294a4447c75ce52fd69ffed219d9a" exitCode=0
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.900925 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" event={"ID":"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7","Type":"ContainerDied","Data":"fe919107fb6376a36a8228cf633887e30d4294a4447c75ce52fd69ffed219d9a"}
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.963443 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d7fb5b5d9-qtddd"]
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.984301 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5d56577dc4-kw6js"
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.987116 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7d7fb5b5d9-qtddd"]
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.998574 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data-custom\") pod \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") "
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.998675 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbhcp\" (UniqueName: \"kubernetes.io/projected/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-kube-api-access-zbhcp\") pod \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") "
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.998830 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-combined-ca-bundle\") pod \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") "
Nov 28 17:24:13 crc kubenswrapper[5024]: I1128 17:24:13.998923 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data\") pod \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\" (UID: \"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7\") "
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.003654 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-kube-api-access-zbhcp" (OuterVolumeSpecName: "kube-api-access-zbhcp") pod "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" (UID: "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7"). InnerVolumeSpecName "kube-api-access-zbhcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.004973 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" (UID: "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.062329 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" (UID: "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.099176 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data" (OuterVolumeSpecName: "config-data") pod "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" (UID: "fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.101254 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.101287 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbhcp\" (UniqueName: \"kubernetes.io/projected/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-kube-api-access-zbhcp\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.101300 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.101309 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.188555 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wbkl5" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="registry-server" probeResult="failure" output=<
Nov 28 17:24:14 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s
Nov 28 17:24:14 crc kubenswrapper[5024]: >
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.511496 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6351b489-49ff-47a9-bb4f-26632893416c" path="/var/lib/kubelet/pods/6351b489-49ff-47a9-bb4f-26632893416c/volumes"
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.916394 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5d56577dc4-kw6js" event={"ID":"fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7","Type":"ContainerDied","Data":"9366ff0ed727d2fde23868a5c7782bd9a11a717a40aabadb1103941deb18aa6e"}
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.916458 5024 scope.go:117] "RemoveContainer" containerID="fe919107fb6376a36a8228cf633887e30d4294a4447c75ce52fd69ffed219d9a"
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.916494 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5d56577dc4-kw6js"
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.954295 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5d56577dc4-kw6js"]
Nov 28 17:24:14 crc kubenswrapper[5024]: I1128 17:24:14.976571 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5d56577dc4-kw6js"]
Nov 28 17:24:15 crc kubenswrapper[5024]: I1128 17:24:15.911895 5024 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:15 crc kubenswrapper[5024]: I1128 17:24:15.933728 5024 generic.go:334] "Generic (PLEG): container finished" podID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerID="e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d" exitCode=0 Nov 28 17:24:15 crc kubenswrapper[5024]: I1128 17:24:15.933771 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerDied","Data":"e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d"} Nov 28 17:24:15 crc kubenswrapper[5024]: I1128 17:24:15.933798 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f0170ba-8387-4ac8-ab60-2253e69be992","Type":"ContainerDied","Data":"1c4570dbb6a1218dd00791cc49a647918bd238723251282f9fb87c6d17a92b71"} Nov 28 17:24:15 crc kubenswrapper[5024]: I1128 17:24:15.933814 5024 scope.go:117] "RemoveContainer" containerID="dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415" Nov 28 17:24:15 crc kubenswrapper[5024]: I1128 17:24:15.933814 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.004182 5024 scope.go:117] "RemoveContainer" containerID="794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.023209 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-config-data\") pod \"6f0170ba-8387-4ac8-ab60-2253e69be992\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.023256 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-sg-core-conf-yaml\") pod \"6f0170ba-8387-4ac8-ab60-2253e69be992\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.023331 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-log-httpd\") pod \"6f0170ba-8387-4ac8-ab60-2253e69be992\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.023368 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-combined-ca-bundle\") pod \"6f0170ba-8387-4ac8-ab60-2253e69be992\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.023403 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw42b\" (UniqueName: \"kubernetes.io/projected/6f0170ba-8387-4ac8-ab60-2253e69be992-kube-api-access-bw42b\") pod \"6f0170ba-8387-4ac8-ab60-2253e69be992\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.023765 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-run-httpd\") pod \"6f0170ba-8387-4ac8-ab60-2253e69be992\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " Nov 
28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.023801 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-scripts\") pod \"6f0170ba-8387-4ac8-ab60-2253e69be992\" (UID: \"6f0170ba-8387-4ac8-ab60-2253e69be992\") " Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.024554 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6f0170ba-8387-4ac8-ab60-2253e69be992" (UID: "6f0170ba-8387-4ac8-ab60-2253e69be992"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.027848 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6f0170ba-8387-4ac8-ab60-2253e69be992" (UID: "6f0170ba-8387-4ac8-ab60-2253e69be992"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.041254 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0170ba-8387-4ac8-ab60-2253e69be992-kube-api-access-bw42b" (OuterVolumeSpecName: "kube-api-access-bw42b") pod "6f0170ba-8387-4ac8-ab60-2253e69be992" (UID: "6f0170ba-8387-4ac8-ab60-2253e69be992"). InnerVolumeSpecName "kube-api-access-bw42b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.059308 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-scripts" (OuterVolumeSpecName: "scripts") pod "6f0170ba-8387-4ac8-ab60-2253e69be992" (UID: "6f0170ba-8387-4ac8-ab60-2253e69be992"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.066228 5024 scope.go:117] "RemoveContainer" containerID="8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.104613 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6f0170ba-8387-4ac8-ab60-2253e69be992" (UID: "6f0170ba-8387-4ac8-ab60-2253e69be992"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.127828 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.127857 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.127870 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.127879 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw42b\" (UniqueName: \"kubernetes.io/projected/6f0170ba-8387-4ac8-ab60-2253e69be992-kube-api-access-bw42b\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.127887 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f0170ba-8387-4ac8-ab60-2253e69be992-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.182242 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f0170ba-8387-4ac8-ab60-2253e69be992" (UID: "6f0170ba-8387-4ac8-ab60-2253e69be992"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.217740 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-config-data" (OuterVolumeSpecName: "config-data") pod "6f0170ba-8387-4ac8-ab60-2253e69be992" (UID: "6f0170ba-8387-4ac8-ab60-2253e69be992"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.229747 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.229782 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0170ba-8387-4ac8-ab60-2253e69be992-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.249606 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pb4xj" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.250806 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pb4xj" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.304641 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pb4xj" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.314376 5024 scope.go:117] "RemoveContainer" containerID="e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.326541 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.359990 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.374956 5024 scope.go:117] "RemoveContainer" containerID="dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.375575 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415\": container with ID starting with dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415 not found: ID does not exist" containerID="dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.375636 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415"} err="failed to get container status \"dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415\": rpc error: code = NotFound desc = could not find container \"dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415\": container with ID starting with dc80e217916999fb51186de0e0223ba43dcf51e62c054377915a39bc69df6415 not found: ID does not exist" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.375665 5024 scope.go:117] "RemoveContainer" containerID="794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.376056 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea\": container with ID starting with 794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea not found: ID does not exist" containerID="794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea" Nov 28 17:24:16 crc kubenswrapper[5024]: 
I1128 17:24:16.376082 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea"} err="failed to get container status \"794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea\": rpc error: code = NotFound desc = could not find container \"794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea\": container with ID starting with 794b5b1e57e10771890a659302e46f7fffb97090fa861905344ecb02361fb4ea not found: ID does not exist" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.376097 5024 scope.go:117] "RemoveContainer" containerID="8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.376482 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a\": container with ID starting with 8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a not found: ID does not exist" containerID="8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.376505 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a"} err="failed to get container status \"8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a\": rpc error: code = NotFound desc = could not find container \"8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a\": container with ID starting with 8e700896823d75942f04961d215d655851c0de16c8fec02a6cb7dda32816772a not found: ID does not exist" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.376522 5024 scope.go:117] "RemoveContainer" containerID="e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.376768 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d\": container with ID starting with e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d not found: ID does not exist" containerID="e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.376791 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d"} err="failed to get container status \"e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d\": rpc error: code = NotFound desc = could not find container \"e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d\": container with ID starting with e460a87d0002d11afbd87ca2e921933a5c8b762068e7a02ad2e284c3f5aaaa4d not found: ID does not exist" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.487109 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.487906 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="sg-core" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.487930 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="sg-core" 
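
The four ContainerStatus/DeleteContainer exchanges above show an idempotent cleanup pattern: for each of the ceilometer-0 container IDs, CRI-O reports NotFound, the kubelet records the error, and the sync loop carries on (here, straight into the RemoveStaleState entries that follow) because a container that is already gone needs no further removal. A minimal Go sketch of that pattern, using the same gRPC status codes that appear in the log; cleanupContainer, getStatus, and remove are hypothetical stand-ins for the CRI ContainerStatus and RemoveContainer calls, for illustration only, not the kubelet's actual implementation:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // cleanupContainer removes a container by ID, treating "already gone"
    // as success: NotFound from the runtime means an earlier pass (or the
    // runtime itself) deleted it, so there is nothing left to do.
    func cleanupContainer(id string, getStatus, remove func(string) error) error {
    	if err := getStatus(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			// Mirrors the "DeleteContainer returned error ... NotFound"
    			// entries above: the error is surfaced, then tolerated.
    			fmt.Printf("container %q already removed, skipping\n", id)
    			return nil
    		}
    		return fmt.Errorf("get status for %q: %w", id, err)
    	}
    	return remove(id)
    }

    func main() {
    	// Simulated runtime that no longer knows the container ID.
    	notFound := func(string) error {
    		return status.Error(codes.NotFound, "ID does not exist")
    	}
    	noop := func(string) error { return nil }
    	if err := cleanupContainer("dc80e217", notFound, noop); err != nil {
    		fmt.Println("cleanup failed:", err)
    	}
    }

Treating removal as idempotent is what makes pod teardown safe to retry: a second RemoveContainer for the same ID converges on the same end state instead of failing the sync loop.
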
Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.487975 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6351b489-49ff-47a9-bb4f-26632893416c" containerName="heat-api" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.487981 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6351b489-49ff-47a9-bb4f-26632893416c" containerName="heat-api" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.487994 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-notification-agent" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488000 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-notification-agent" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.488009 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="proxy-httpd" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488031 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="proxy-httpd" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.488065 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerName="heat-cfnapi" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488072 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerName="heat-cfnapi" Nov 28 17:24:16 crc kubenswrapper[5024]: E1128 17:24:16.488087 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-central-agent" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488092 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-central-agent" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488361 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" containerName="heat-cfnapi" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488386 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-central-agent" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488401 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6351b489-49ff-47a9-bb4f-26632893416c" containerName="heat-api" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488419 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="ceilometer-notification-agent" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488431 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="sg-core" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.488441 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" containerName="proxy-httpd" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.490814 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.495461 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.495857 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.517732 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0170ba-8387-4ac8-ab60-2253e69be992" path="/var/lib/kubelet/pods/6f0170ba-8387-4ac8-ab60-2253e69be992/volumes" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.526196 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7" path="/var/lib/kubelet/pods/fa3cba9b-0c1f-4d10-87cc-7159d1e1b0c7/volumes" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.526967 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.681607 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-run-httpd\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.681713 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.681738 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-scripts\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.681852 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-config-data\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.681934 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpwh2\" (UniqueName: \"kubernetes.io/projected/5db9a796-d716-420d-9c0f-5ec9e4972585-kube-api-access-qpwh2\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.681987 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-log-httpd\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.682077 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.783713 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.784090 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-scripts\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.784232 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-config-data\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.784310 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpwh2\" (UniqueName: \"kubernetes.io/projected/5db9a796-d716-420d-9c0f-5ec9e4972585-kube-api-access-qpwh2\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.784357 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-log-httpd\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.784433 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.784559 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-run-httpd\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.788395 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-run-httpd\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.792214 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.796884 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-log-httpd\") pod \"ceilometer-0\" (UID: 
\"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.802451 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-config-data\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.814642 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.814673 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-scripts\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.827490 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpwh2\" (UniqueName: \"kubernetes.io/projected/5db9a796-d716-420d-9c0f-5ec9e4972585-kube-api-access-qpwh2\") pod \"ceilometer-0\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.838250 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.947660 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qqm8c"] Nov 28 17:24:16 crc kubenswrapper[5024]: I1128 17:24:16.950481 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.066772 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qqm8c"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.119108 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vpc6d"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.121589 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.177251 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pb4xj" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.195742 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vpc6d"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.231867 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg6cn\" (UniqueName: \"kubernetes.io/projected/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-kube-api-access-tg6cn\") pod \"nova-api-db-create-qqm8c\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.232384 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-operator-scripts\") pod \"nova-api-db-create-qqm8c\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.316812 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1d37-account-create-update-ps7zm"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.329922 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.334684 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.336315 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-operator-scripts\") pod \"nova-api-db-create-qqm8c\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.336413 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg6cn\" (UniqueName: \"kubernetes.io/projected/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-kube-api-access-tg6cn\") pod \"nova-api-db-create-qqm8c\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.337418 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-operator-scripts\") pod \"nova-api-db-create-qqm8c\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.357064 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg6cn\" (UniqueName: \"kubernetes.io/projected/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-kube-api-access-tg6cn\") pod \"nova-api-db-create-qqm8c\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.371082 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6n8zs"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.372912 5024 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.416708 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1d37-account-create-update-ps7zm"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.442616 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggd8l\" (UniqueName: \"kubernetes.io/projected/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-kube-api-access-ggd8l\") pod \"nova-cell1-db-create-6n8zs\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.442881 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17bfb5-bf03-441a-ac54-d1e842049a41-operator-scripts\") pod \"nova-api-1d37-account-create-update-ps7zm\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.442991 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-operator-scripts\") pod \"nova-cell1-db-create-6n8zs\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.443147 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfdcc\" (UniqueName: \"kubernetes.io/projected/8f17bfb5-bf03-441a-ac54-d1e842049a41-kube-api-access-vfdcc\") pod \"nova-api-1d37-account-create-update-ps7zm\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.443263 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c497f27e-01b1-457c-bcf1-dc7652e9f771-operator-scripts\") pod \"nova-cell0-db-create-vpc6d\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.443433 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc2cz\" (UniqueName: \"kubernetes.io/projected/c497f27e-01b1-457c-bcf1-dc7652e9f771-kube-api-access-qc2cz\") pod \"nova-cell0-db-create-vpc6d\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.452373 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6n8zs"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.478716 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.505904 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pb4xj"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.559275 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfdcc\" (UniqueName: \"kubernetes.io/projected/8f17bfb5-bf03-441a-ac54-d1e842049a41-kube-api-access-vfdcc\") pod \"nova-api-1d37-account-create-update-ps7zm\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.769109 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c497f27e-01b1-457c-bcf1-dc7652e9f771-operator-scripts\") pod \"nova-cell0-db-create-vpc6d\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.769531 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc2cz\" (UniqueName: \"kubernetes.io/projected/c497f27e-01b1-457c-bcf1-dc7652e9f771-kube-api-access-qc2cz\") pod \"nova-cell0-db-create-vpc6d\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.769855 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggd8l\" (UniqueName: \"kubernetes.io/projected/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-kube-api-access-ggd8l\") pod \"nova-cell1-db-create-6n8zs\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.769951 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17bfb5-bf03-441a-ac54-d1e842049a41-operator-scripts\") pod \"nova-api-1d37-account-create-update-ps7zm\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.770181 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-operator-scripts\") pod \"nova-cell1-db-create-6n8zs\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.654267 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfdcc\" (UniqueName: \"kubernetes.io/projected/8f17bfb5-bf03-441a-ac54-d1e842049a41-kube-api-access-vfdcc\") pod \"nova-api-1d37-account-create-update-ps7zm\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.774493 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c497f27e-01b1-457c-bcf1-dc7652e9f771-operator-scripts\") pod \"nova-cell0-db-create-vpc6d\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.780844 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-operator-scripts\") pod \"nova-cell1-db-create-6n8zs\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.781581 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c5b8-account-create-update-p7vd8"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.784571 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17bfb5-bf03-441a-ac54-d1e842049a41-operator-scripts\") pod \"nova-api-1d37-account-create-update-ps7zm\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.796042 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.827944 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.829859 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggd8l\" (UniqueName: \"kubernetes.io/projected/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-kube-api-access-ggd8l\") pod \"nova-cell1-db-create-6n8zs\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.861130 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c5b8-account-create-update-p7vd8"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.864653 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc2cz\" (UniqueName: \"kubernetes.io/projected/c497f27e-01b1-457c-bcf1-dc7652e9f771-kube-api-access-qc2cz\") pod \"nova-cell0-db-create-vpc6d\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.872139 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdj7w\" (UniqueName: \"kubernetes.io/projected/3775a71c-b9bd-4550-b613-113d5eb727d2-kube-api-access-vdj7w\") pod \"nova-cell0-c5b8-account-create-update-p7vd8\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.872195 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3775a71c-b9bd-4550-b613-113d5eb727d2-operator-scripts\") pod \"nova-cell0-c5b8-account-create-update-p7vd8\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.888063 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2880-account-create-update-nzw62"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.889726 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.892505 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.924301 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2880-account-create-update-nzw62"] Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.975691 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdj7w\" (UniqueName: \"kubernetes.io/projected/3775a71c-b9bd-4550-b613-113d5eb727d2-kube-api-access-vdj7w\") pod \"nova-cell0-c5b8-account-create-update-p7vd8\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.976699 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3775a71c-b9bd-4550-b613-113d5eb727d2-operator-scripts\") pod \"nova-cell0-c5b8-account-create-update-p7vd8\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.976820 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jq4\" (UniqueName: \"kubernetes.io/projected/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-kube-api-access-b4jq4\") pod \"nova-cell1-2880-account-create-update-nzw62\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.976993 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-operator-scripts\") pod \"nova-cell1-2880-account-create-update-nzw62\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:17 crc kubenswrapper[5024]: I1128 17:24:17.978954 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3775a71c-b9bd-4550-b613-113d5eb727d2-operator-scripts\") pod \"nova-cell0-c5b8-account-create-update-p7vd8\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.013556 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdj7w\" (UniqueName: \"kubernetes.io/projected/3775a71c-b9bd-4550-b613-113d5eb727d2-kube-api-access-vdj7w\") pod \"nova-cell0-c5b8-account-create-update-p7vd8\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.062349 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.071921 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.079880 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-operator-scripts\") pod \"nova-cell1-2880-account-create-update-nzw62\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.080238 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jq4\" (UniqueName: \"kubernetes.io/projected/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-kube-api-access-b4jq4\") pod \"nova-cell1-2880-account-create-update-nzw62\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.081067 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-operator-scripts\") pod \"nova-cell1-2880-account-create-update-nzw62\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.102348 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jq4\" (UniqueName: \"kubernetes.io/projected/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-kube-api-access-b4jq4\") pod \"nova-cell1-2880-account-create-update-nzw62\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.149702 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.185749 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.258731 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.263893 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.443685 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qqm8c"] Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.744379 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6n8zs"] Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.745337 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 17:24:18 crc kubenswrapper[5024]: I1128 17:24:18.756552 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1d37-account-create-update-ps7zm"] Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.166357 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6n8zs" event={"ID":"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4","Type":"ContainerStarted","Data":"a7db1b1c05664d0657b1b8cc60298259921aac248d781b42981bf51e43125438"} Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.173149 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerStarted","Data":"562b7ca379d7799091a777a01364599a073ff3997cd33a431e117941233e755c"} Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.202750 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d37-account-create-update-ps7zm" event={"ID":"8f17bfb5-bf03-441a-ac54-d1e842049a41","Type":"ContainerStarted","Data":"1c3e56227ce15c6ca47a75775f2789af20624350cb14d1dd80ac102e6d1844a0"} Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.207466 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pb4xj" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="registry-server" containerID="cri-o://ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e" gracePeriod=2 Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.208211 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqm8c" event={"ID":"92c51dd3-21b1-4fdf-a076-64dd49fa10f9","Type":"ContainerStarted","Data":"a2e85227963a57a756ff140263f9735c3ed88676b2ed6b90161b6addbb2f7492"} Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.208299 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqm8c" event={"ID":"92c51dd3-21b1-4fdf-a076-64dd49fa10f9","Type":"ContainerStarted","Data":"4879886f6a1321b30406c126ecac534355b98f44a76894779694abe9af36968a"} Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.244516 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qqm8c" podStartSLOduration=3.244494902 podStartE2EDuration="3.244494902s" podCreationTimestamp="2025-11-28 17:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:24:19.23257107 +0000 UTC m=+1561.281491975" watchObservedRunningTime="2025-11-28 17:24:19.244494902 +0000 UTC m=+1561.293415807" Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.302319 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-c5b8-account-create-update-p7vd8"] Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.325835 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2880-account-create-update-nzw62"] Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.347794 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vpc6d"] Nov 28 17:24:19 crc kubenswrapper[5024]: E1128 17:24:19.367589 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:24:19 crc kubenswrapper[5024]: E1128 17:24:19.369548 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:24:19 crc kubenswrapper[5024]: E1128 17:24:19.371389 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:24:19 crc kubenswrapper[5024]: E1128 17:24:19.371527 5024 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-d5797c764-zffzc" podUID="aca9dafd-8069-42d9-b644-12fc96509330" containerName="heat-engine" Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.433750 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.436567 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 17:24:19 crc kubenswrapper[5024]: I1128 17:24:19.736387 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.109995 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pb4xj" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.262234 5024 generic.go:334] "Generic (PLEG): container finished" podID="c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4" containerID="a2ff8788c5a5e23f28cf3f3dd480b8ef8711fc1329213fbac3c6f6b6497bfd6b" exitCode=0 Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.262322 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6n8zs" event={"ID":"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4","Type":"ContainerDied","Data":"a2ff8788c5a5e23f28cf3f3dd480b8ef8711fc1329213fbac3c6f6b6497bfd6b"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.266330 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2880-account-create-update-nzw62" event={"ID":"769e3a29-37e1-4aa5-ae9a-c82e3efe8892","Type":"ContainerStarted","Data":"942f5aa92ef3b38b197eea45f78cb718b68a82262c39402b660a603500e760ca"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.266379 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2880-account-create-update-nzw62" event={"ID":"769e3a29-37e1-4aa5-ae9a-c82e3efe8892","Type":"ContainerStarted","Data":"f549141f02de5d8d4ce6dc8fdb9d5053318f19cc92da237c795ba5cdbb127ee9"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.282781 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvf92\" (UniqueName: \"kubernetes.io/projected/9ed317ff-91e9-4a99-beae-89c81fe8b551-kube-api-access-xvf92\") pod \"9ed317ff-91e9-4a99-beae-89c81fe8b551\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.283159 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-catalog-content\") pod \"9ed317ff-91e9-4a99-beae-89c81fe8b551\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.283346 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-utilities\") pod \"9ed317ff-91e9-4a99-beae-89c81fe8b551\" (UID: \"9ed317ff-91e9-4a99-beae-89c81fe8b551\") " Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.295370 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerStarted","Data":"075eda0a4905110118d2d5c317ad91f7289a2bf3b0e58aa9b27513d844ae66d4"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.305640 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-utilities" (OuterVolumeSpecName: "utilities") pod "9ed317ff-91e9-4a99-beae-89c81fe8b551" (UID: "9ed317ff-91e9-4a99-beae-89c81fe8b551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.329450 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed317ff-91e9-4a99-beae-89c81fe8b551-kube-api-access-xvf92" (OuterVolumeSpecName: "kube-api-access-xvf92") pod "9ed317ff-91e9-4a99-beae-89c81fe8b551" (UID: "9ed317ff-91e9-4a99-beae-89c81fe8b551"). InnerVolumeSpecName "kube-api-access-xvf92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.352456 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vpc6d" event={"ID":"c497f27e-01b1-457c-bcf1-dc7652e9f771","Type":"ContainerStarted","Data":"150b88c3ea39666838ae99091bf8d61811d537eb97993c5c10421521a6d8f2bb"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.352515 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vpc6d" event={"ID":"c497f27e-01b1-457c-bcf1-dc7652e9f771","Type":"ContainerStarted","Data":"311862bfb3898d2444d408558038b82f99c0b197b1d07f01e3254c0bad07bcae"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.361989 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-2880-account-create-update-nzw62" podStartSLOduration=3.36196856 podStartE2EDuration="3.36196856s" podCreationTimestamp="2025-11-28 17:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:24:20.353044053 +0000 UTC m=+1562.401964968" watchObservedRunningTime="2025-11-28 17:24:20.36196856 +0000 UTC m=+1562.410889465" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.376792 5024 generic.go:334] "Generic (PLEG): container finished" podID="8f17bfb5-bf03-441a-ac54-d1e842049a41" containerID="fac2d5267c6d307b8b23bddb0cc5c653211107a572b2fd206b640be03d034e9a" exitCode=0 Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.376853 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d37-account-create-update-ps7zm" event={"ID":"8f17bfb5-bf03-441a-ac54-d1e842049a41","Type":"ContainerDied","Data":"fac2d5267c6d307b8b23bddb0cc5c653211107a572b2fd206b640be03d034e9a"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.390902 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvf92\" (UniqueName: \"kubernetes.io/projected/9ed317ff-91e9-4a99-beae-89c81fe8b551-kube-api-access-xvf92\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.390943 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.430653 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-vpc6d" podStartSLOduration=4.430632942 podStartE2EDuration="4.430632942s" podCreationTimestamp="2025-11-28 17:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:24:20.376092555 +0000 UTC m=+1562.425013460" watchObservedRunningTime="2025-11-28 17:24:20.430632942 +0000 UTC m=+1562.479553847" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.456354 5024 generic.go:334] "Generic (PLEG): container finished" podID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerID="ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e" exitCode=0 Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.456471 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pb4xj" event={"ID":"9ed317ff-91e9-4a99-beae-89c81fe8b551","Type":"ContainerDied","Data":"ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e"} Nov 28 17:24:20 crc 
kubenswrapper[5024]: I1128 17:24:20.456506 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pb4xj" event={"ID":"9ed317ff-91e9-4a99-beae-89c81fe8b551","Type":"ContainerDied","Data":"ec1da5b582eca04ca4b6e417f0c324ebef7510776be0dc48ba999dfcf0702485"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.456530 5024 scope.go:117] "RemoveContainer" containerID="ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.456812 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pb4xj" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.519888 5024 generic.go:334] "Generic (PLEG): container finished" podID="92c51dd3-21b1-4fdf-a076-64dd49fa10f9" containerID="a2e85227963a57a756ff140263f9735c3ed88676b2ed6b90161b6addbb2f7492" exitCode=0 Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.587600 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" event={"ID":"3775a71c-b9bd-4550-b613-113d5eb727d2","Type":"ContainerStarted","Data":"3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.587636 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" event={"ID":"3775a71c-b9bd-4550-b613-113d5eb727d2","Type":"ContainerStarted","Data":"feed579bb4d9016395f370dc017827b5e9dc2a8a59e251fbe233c395e0aa2ffd"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.587648 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqm8c" event={"ID":"92c51dd3-21b1-4fdf-a076-64dd49fa10f9","Type":"ContainerDied","Data":"a2e85227963a57a756ff140263f9735c3ed88676b2ed6b90161b6addbb2f7492"} Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.644169 5024 scope.go:117] "RemoveContainer" containerID="7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.703708 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ed317ff-91e9-4a99-beae-89c81fe8b551" (UID: "9ed317ff-91e9-4a99-beae-89c81fe8b551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.717186 5024 scope.go:117] "RemoveContainer" containerID="8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.723306 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" podStartSLOduration=3.723282808 podStartE2EDuration="3.723282808s" podCreationTimestamp="2025-11-28 17:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:24:20.545483541 +0000 UTC m=+1562.594404436" watchObservedRunningTime="2025-11-28 17:24:20.723282808 +0000 UTC m=+1562.772203723" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.775213 5024 scope.go:117] "RemoveContainer" containerID="ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e" Nov 28 17:24:20 crc kubenswrapper[5024]: E1128 17:24:20.777390 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e\": container with ID starting with ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e not found: ID does not exist" containerID="ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.777683 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e"} err="failed to get container status \"ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e\": rpc error: code = NotFound desc = could not find container \"ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e\": container with ID starting with ec6df9837c6e7f2b2d788509dfbd2af0c81abd576c20d721e2fcadbd6503d88e not found: ID does not exist" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.777775 5024 scope.go:117] "RemoveContainer" containerID="7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d" Nov 28 17:24:20 crc kubenswrapper[5024]: E1128 17:24:20.780176 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d\": container with ID starting with 7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d not found: ID does not exist" containerID="7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.780224 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d"} err="failed to get container status \"7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d\": rpc error: code = NotFound desc = could not find container \"7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d\": container with ID starting with 7c1c97c183e7c22fcd41f5fd101a618e5021e260fc53a2d4235ba0ec3bff208d not found: ID does not exist" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.780268 5024 scope.go:117] "RemoveContainer" containerID="8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680" Nov 28 17:24:20 crc kubenswrapper[5024]: E1128 17:24:20.781199 5024 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680\": container with ID starting with 8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680 not found: ID does not exist" containerID="8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.781304 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680"} err="failed to get container status \"8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680\": rpc error: code = NotFound desc = could not find container \"8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680\": container with ID starting with 8909acacf4830391f528193f0eae9e168a4dfc7e61cc5ab616c461e5d8d49680 not found: ID does not exist" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.800872 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed317ff-91e9-4a99-beae-89c81fe8b551-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.842369 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pb4xj"] Nov 28 17:24:20 crc kubenswrapper[5024]: I1128 17:24:20.861200 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pb4xj"] Nov 28 17:24:20 crc kubenswrapper[5024]: W1128 17:24:20.951756 5024 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3775a71c_b9bd_4550_b613_113d5eb727d2.slice/crio-conmon-3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3775a71c_b9bd_4550_b613_113d5eb727d2.slice/crio-conmon-3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df.scope: no such file or directory Nov 28 17:24:20 crc kubenswrapper[5024]: W1128 17:24:20.953443 5024 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc497f27e_01b1_457c_bcf1_dc7652e9f771.slice/crio-150b88c3ea39666838ae99091bf8d61811d537eb97993c5c10421521a6d8f2bb.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc497f27e_01b1_457c_bcf1_dc7652e9f771.slice/crio-150b88c3ea39666838ae99091bf8d61811d537eb97993c5c10421521a6d8f2bb.scope: no such file or directory Nov 28 17:24:20 crc kubenswrapper[5024]: W1128 17:24:20.953564 5024 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3775a71c_b9bd_4550_b613_113d5eb727d2.slice/crio-3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3775a71c_b9bd_4550_b613_113d5eb727d2.slice/crio-3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df.scope: no such file or directory Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.536344 5024 generic.go:334] "Generic (PLEG): container finished" podID="aca9dafd-8069-42d9-b644-12fc96509330" 
containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" exitCode=0 Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.536426 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5797c764-zffzc" event={"ID":"aca9dafd-8069-42d9-b644-12fc96509330","Type":"ContainerDied","Data":"c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c"} Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.537035 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d5797c764-zffzc" event={"ID":"aca9dafd-8069-42d9-b644-12fc96509330","Type":"ContainerDied","Data":"716072e8b8d484f5e3823ce24d09a0c4e17556392bbd8d9cc9085b9dae4ae6e2"} Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.537050 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716072e8b8d484f5e3823ce24d09a0c4e17556392bbd8d9cc9085b9dae4ae6e2" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.540489 5024 generic.go:334] "Generic (PLEG): container finished" podID="769e3a29-37e1-4aa5-ae9a-c82e3efe8892" containerID="942f5aa92ef3b38b197eea45f78cb718b68a82262c39402b660a603500e760ca" exitCode=0 Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.540586 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2880-account-create-update-nzw62" event={"ID":"769e3a29-37e1-4aa5-ae9a-c82e3efe8892","Type":"ContainerDied","Data":"942f5aa92ef3b38b197eea45f78cb718b68a82262c39402b660a603500e760ca"} Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.540725 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.543824 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerStarted","Data":"d24d2d82c1369b629307b48e13f1ad08aa07f83436444fdd9e65519fa3729976"} Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.546224 5024 generic.go:334] "Generic (PLEG): container finished" podID="c497f27e-01b1-457c-bcf1-dc7652e9f771" containerID="150b88c3ea39666838ae99091bf8d61811d537eb97993c5c10421521a6d8f2bb" exitCode=0 Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.546299 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vpc6d" event={"ID":"c497f27e-01b1-457c-bcf1-dc7652e9f771","Type":"ContainerDied","Data":"150b88c3ea39666838ae99091bf8d61811d537eb97993c5c10421521a6d8f2bb"} Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.551353 5024 generic.go:334] "Generic (PLEG): container finished" podID="3775a71c-b9bd-4550-b613-113d5eb727d2" containerID="3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df" exitCode=0 Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.551643 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" event={"ID":"3775a71c-b9bd-4550-b613-113d5eb727d2","Type":"ContainerDied","Data":"3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df"} Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.624222 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data\") pod \"aca9dafd-8069-42d9-b644-12fc96509330\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 
17:24:21.624368 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-combined-ca-bundle\") pod \"aca9dafd-8069-42d9-b644-12fc96509330\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.624619 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqfv7\" (UniqueName: \"kubernetes.io/projected/aca9dafd-8069-42d9-b644-12fc96509330-kube-api-access-xqfv7\") pod \"aca9dafd-8069-42d9-b644-12fc96509330\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.624726 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data-custom\") pod \"aca9dafd-8069-42d9-b644-12fc96509330\" (UID: \"aca9dafd-8069-42d9-b644-12fc96509330\") " Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.636996 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aca9dafd-8069-42d9-b644-12fc96509330" (UID: "aca9dafd-8069-42d9-b644-12fc96509330"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.641415 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca9dafd-8069-42d9-b644-12fc96509330-kube-api-access-xqfv7" (OuterVolumeSpecName: "kube-api-access-xqfv7") pod "aca9dafd-8069-42d9-b644-12fc96509330" (UID: "aca9dafd-8069-42d9-b644-12fc96509330"). InnerVolumeSpecName "kube-api-access-xqfv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.737246 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqfv7\" (UniqueName: \"kubernetes.io/projected/aca9dafd-8069-42d9-b644-12fc96509330-kube-api-access-xqfv7\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.737498 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.774478 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca9dafd-8069-42d9-b644-12fc96509330" (UID: "aca9dafd-8069-42d9-b644-12fc96509330"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.839630 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.895215 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data" (OuterVolumeSpecName: "config-data") pod "aca9dafd-8069-42d9-b644-12fc96509330" (UID: "aca9dafd-8069-42d9-b644-12fc96509330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:21 crc kubenswrapper[5024]: I1128 17:24:21.944418 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca9dafd-8069-42d9-b644-12fc96509330-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.130157 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.253488 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg6cn\" (UniqueName: \"kubernetes.io/projected/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-kube-api-access-tg6cn\") pod \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.253834 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-operator-scripts\") pod \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\" (UID: \"92c51dd3-21b1-4fdf-a076-64dd49fa10f9\") " Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.254989 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92c51dd3-21b1-4fdf-a076-64dd49fa10f9" (UID: "92c51dd3-21b1-4fdf-a076-64dd49fa10f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.264252 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-kube-api-access-tg6cn" (OuterVolumeSpecName: "kube-api-access-tg6cn") pod "92c51dd3-21b1-4fdf-a076-64dd49fa10f9" (UID: "92c51dd3-21b1-4fdf-a076-64dd49fa10f9"). InnerVolumeSpecName "kube-api-access-tg6cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.356844 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg6cn\" (UniqueName: \"kubernetes.io/projected/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-kube-api-access-tg6cn\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.356885 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c51dd3-21b1-4fdf-a076-64dd49fa10f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.443490 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.452053 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.511747 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" path="/var/lib/kubelet/pods/9ed317ff-91e9-4a99-beae-89c81fe8b551/volumes" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.560688 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17bfb5-bf03-441a-ac54-d1e842049a41-operator-scripts\") pod \"8f17bfb5-bf03-441a-ac54-d1e842049a41\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.560746 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-operator-scripts\") pod \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.560969 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggd8l\" (UniqueName: \"kubernetes.io/projected/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-kube-api-access-ggd8l\") pod \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\" (UID: \"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4\") " Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.561341 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f17bfb5-bf03-441a-ac54-d1e842049a41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f17bfb5-bf03-441a-ac54-d1e842049a41" (UID: "8f17bfb5-bf03-441a-ac54-d1e842049a41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.561544 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4" (UID: "c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.562183 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfdcc\" (UniqueName: \"kubernetes.io/projected/8f17bfb5-bf03-441a-ac54-d1e842049a41-kube-api-access-vfdcc\") pod \"8f17bfb5-bf03-441a-ac54-d1e842049a41\" (UID: \"8f17bfb5-bf03-441a-ac54-d1e842049a41\") " Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.563163 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f17bfb5-bf03-441a-ac54-d1e842049a41-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.563184 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.568479 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-kube-api-access-ggd8l" (OuterVolumeSpecName: "kube-api-access-ggd8l") pod "c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4" (UID: "c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4"). InnerVolumeSpecName "kube-api-access-ggd8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.570497 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1d37-account-create-update-ps7zm" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.570620 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1d37-account-create-update-ps7zm" event={"ID":"8f17bfb5-bf03-441a-ac54-d1e842049a41","Type":"ContainerDied","Data":"1c3e56227ce15c6ca47a75775f2789af20624350cb14d1dd80ac102e6d1844a0"} Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.570697 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3e56227ce15c6ca47a75775f2789af20624350cb14d1dd80ac102e6d1844a0" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.573501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqm8c" event={"ID":"92c51dd3-21b1-4fdf-a076-64dd49fa10f9","Type":"ContainerDied","Data":"4879886f6a1321b30406c126ecac534355b98f44a76894779694abe9af36968a"} Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.573546 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4879886f6a1321b30406c126ecac534355b98f44a76894779694abe9af36968a" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.574062 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qqm8c" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.577264 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f17bfb5-bf03-441a-ac54-d1e842049a41-kube-api-access-vfdcc" (OuterVolumeSpecName: "kube-api-access-vfdcc") pod "8f17bfb5-bf03-441a-ac54-d1e842049a41" (UID: "8f17bfb5-bf03-441a-ac54-d1e842049a41"). InnerVolumeSpecName "kube-api-access-vfdcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.578902 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6n8zs" event={"ID":"c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4","Type":"ContainerDied","Data":"a7db1b1c05664d0657b1b8cc60298259921aac248d781b42981bf51e43125438"} Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.578941 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7db1b1c05664d0657b1b8cc60298259921aac248d781b42981bf51e43125438" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.579237 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6n8zs" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.584787 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d5797c764-zffzc" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.585714 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerStarted","Data":"41f54bb3994ff210f5c35e67e4d9fef570e3a33050e2834a5ab6fd219d60ca35"} Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.665529 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggd8l\" (UniqueName: \"kubernetes.io/projected/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4-kube-api-access-ggd8l\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.665560 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfdcc\" (UniqueName: \"kubernetes.io/projected/8f17bfb5-bf03-441a-ac54-d1e842049a41-kube-api-access-vfdcc\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.674734 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d5797c764-zffzc"] Nov 28 17:24:22 crc kubenswrapper[5024]: I1128 17:24:22.688343 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-d5797c764-zffzc"] Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.063529 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.101736 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.168690 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.178628 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc2cz\" (UniqueName: \"kubernetes.io/projected/c497f27e-01b1-457c-bcf1-dc7652e9f771-kube-api-access-qc2cz\") pod \"c497f27e-01b1-457c-bcf1-dc7652e9f771\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.178735 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c497f27e-01b1-457c-bcf1-dc7652e9f771-operator-scripts\") pod \"c497f27e-01b1-457c-bcf1-dc7652e9f771\" (UID: \"c497f27e-01b1-457c-bcf1-dc7652e9f771\") " Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.181786 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c497f27e-01b1-457c-bcf1-dc7652e9f771-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c497f27e-01b1-457c-bcf1-dc7652e9f771" (UID: "c497f27e-01b1-457c-bcf1-dc7652e9f771"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.182512 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c497f27e-01b1-457c-bcf1-dc7652e9f771-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.183870 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.196516 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.199334 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c497f27e-01b1-457c-bcf1-dc7652e9f771-kube-api-access-qc2cz" (OuterVolumeSpecName: "kube-api-access-qc2cz") pod "c497f27e-01b1-457c-bcf1-dc7652e9f771" (UID: "c497f27e-01b1-457c-bcf1-dc7652e9f771"). InnerVolumeSpecName "kube-api-access-qc2cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.284255 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdj7w\" (UniqueName: \"kubernetes.io/projected/3775a71c-b9bd-4550-b613-113d5eb727d2-kube-api-access-vdj7w\") pod \"3775a71c-b9bd-4550-b613-113d5eb727d2\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.284389 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4jq4\" (UniqueName: \"kubernetes.io/projected/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-kube-api-access-b4jq4\") pod \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.284671 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-operator-scripts\") pod \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\" (UID: \"769e3a29-37e1-4aa5-ae9a-c82e3efe8892\") " Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.284704 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3775a71c-b9bd-4550-b613-113d5eb727d2-operator-scripts\") pod \"3775a71c-b9bd-4550-b613-113d5eb727d2\" (UID: \"3775a71c-b9bd-4550-b613-113d5eb727d2\") " Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.285451 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc2cz\" (UniqueName: \"kubernetes.io/projected/c497f27e-01b1-457c-bcf1-dc7652e9f771-kube-api-access-qc2cz\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.285821 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "769e3a29-37e1-4aa5-ae9a-c82e3efe8892" (UID: "769e3a29-37e1-4aa5-ae9a-c82e3efe8892"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.285851 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3775a71c-b9bd-4550-b613-113d5eb727d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3775a71c-b9bd-4550-b613-113d5eb727d2" (UID: "3775a71c-b9bd-4550-b613-113d5eb727d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.289203 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-kube-api-access-b4jq4" (OuterVolumeSpecName: "kube-api-access-b4jq4") pod "769e3a29-37e1-4aa5-ae9a-c82e3efe8892" (UID: "769e3a29-37e1-4aa5-ae9a-c82e3efe8892"). InnerVolumeSpecName "kube-api-access-b4jq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.289342 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3775a71c-b9bd-4550-b613-113d5eb727d2-kube-api-access-vdj7w" (OuterVolumeSpecName: "kube-api-access-vdj7w") pod "3775a71c-b9bd-4550-b613-113d5eb727d2" (UID: "3775a71c-b9bd-4550-b613-113d5eb727d2"). 
InnerVolumeSpecName "kube-api-access-vdj7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.367180 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wbkl5"] Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.392363 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.392678 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3775a71c-b9bd-4550-b613-113d5eb727d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.392695 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdj7w\" (UniqueName: \"kubernetes.io/projected/3775a71c-b9bd-4550-b613-113d5eb727d2-kube-api-access-vdj7w\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.393009 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4jq4\" (UniqueName: \"kubernetes.io/projected/769e3a29-37e1-4aa5-ae9a-c82e3efe8892-kube-api-access-b4jq4\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.596633 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2880-account-create-update-nzw62" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.596631 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2880-account-create-update-nzw62" event={"ID":"769e3a29-37e1-4aa5-ae9a-c82e3efe8892","Type":"ContainerDied","Data":"f549141f02de5d8d4ce6dc8fdb9d5053318f19cc92da237c795ba5cdbb127ee9"} Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.596689 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f549141f02de5d8d4ce6dc8fdb9d5053318f19cc92da237c795ba5cdbb127ee9" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.601502 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerStarted","Data":"665af5b5bcc53c6c5bd3b3f56acdf863e1c6ee4ebb549976b673621475e62806"} Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.601673 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-central-agent" containerID="cri-o://075eda0a4905110118d2d5c317ad91f7289a2bf3b0e58aa9b27513d844ae66d4" gracePeriod=30 Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.601914 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.602114 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="proxy-httpd" containerID="cri-o://665af5b5bcc53c6c5bd3b3f56acdf863e1c6ee4ebb549976b673621475e62806" gracePeriod=30 Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.602211 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="sg-core" 
containerID="cri-o://41f54bb3994ff210f5c35e67e4d9fef570e3a33050e2834a5ab6fd219d60ca35" gracePeriod=30 Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.602190 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-notification-agent" containerID="cri-o://d24d2d82c1369b629307b48e13f1ad08aa07f83436444fdd9e65519fa3729976" gracePeriod=30 Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.612371 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vpc6d" event={"ID":"c497f27e-01b1-457c-bcf1-dc7652e9f771","Type":"ContainerDied","Data":"311862bfb3898d2444d408558038b82f99c0b197b1d07f01e3254c0bad07bcae"} Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.612423 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="311862bfb3898d2444d408558038b82f99c0b197b1d07f01e3254c0bad07bcae" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.612504 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vpc6d" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.617342 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.617384 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c5b8-account-create-update-p7vd8" event={"ID":"3775a71c-b9bd-4550-b613-113d5eb727d2","Type":"ContainerDied","Data":"feed579bb4d9016395f370dc017827b5e9dc2a8a59e251fbe233c395e0aa2ffd"} Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.617416 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feed579bb4d9016395f370dc017827b5e9dc2a8a59e251fbe233c395e0aa2ffd" Nov 28 17:24:23 crc kubenswrapper[5024]: I1128 17:24:23.660210 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.738516148 podStartE2EDuration="7.658996101s" podCreationTimestamp="2025-11-28 17:24:16 +0000 UTC" firstStartedPulling="2025-11-28 17:24:18.319466052 +0000 UTC m=+1560.368386957" lastFinishedPulling="2025-11-28 17:24:23.239946005 +0000 UTC m=+1565.288866910" observedRunningTime="2025-11-28 17:24:23.637130173 +0000 UTC m=+1565.686051088" watchObservedRunningTime="2025-11-28 17:24:23.658996101 +0000 UTC m=+1565.707917006" Nov 28 17:24:24 crc kubenswrapper[5024]: I1128 17:24:24.517312 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca9dafd-8069-42d9-b644-12fc96509330" path="/var/lib/kubelet/pods/aca9dafd-8069-42d9-b644-12fc96509330/volumes" Nov 28 17:24:24 crc kubenswrapper[5024]: I1128 17:24:24.632028 5024 generic.go:334] "Generic (PLEG): container finished" podID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerID="41f54bb3994ff210f5c35e67e4d9fef570e3a33050e2834a5ab6fd219d60ca35" exitCode=2 Nov 28 17:24:24 crc kubenswrapper[5024]: I1128 17:24:24.632062 5024 generic.go:334] "Generic (PLEG): container finished" podID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerID="d24d2d82c1369b629307b48e13f1ad08aa07f83436444fdd9e65519fa3729976" exitCode=0 Nov 28 17:24:24 crc kubenswrapper[5024]: I1128 17:24:24.632063 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerDied","Data":"41f54bb3994ff210f5c35e67e4d9fef570e3a33050e2834a5ab6fd219d60ca35"} Nov 28 17:24:24 crc kubenswrapper[5024]: I1128 17:24:24.632130 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerDied","Data":"d24d2d82c1369b629307b48e13f1ad08aa07f83436444fdd9e65519fa3729976"} Nov 28 17:24:24 crc kubenswrapper[5024]: I1128 17:24:24.632263 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wbkl5" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="registry-server" containerID="cri-o://f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe" gracePeriod=2 Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.222569 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.333902 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xmr4\" (UniqueName: \"kubernetes.io/projected/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-kube-api-access-7xmr4\") pod \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.333998 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-utilities\") pod \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.334245 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-catalog-content\") pod \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\" (UID: \"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda\") " Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.334790 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-utilities" (OuterVolumeSpecName: "utilities") pod "4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" (UID: "4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.354372 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-kube-api-access-7xmr4" (OuterVolumeSpecName: "kube-api-access-7xmr4") pod "4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" (UID: "4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda"). InnerVolumeSpecName "kube-api-access-7xmr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.436602 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xmr4\" (UniqueName: \"kubernetes.io/projected/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-kube-api-access-7xmr4\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.436644 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.445495 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" (UID: "4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.539766 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.647535 5024 generic.go:334] "Generic (PLEG): container finished" podID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerID="f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe" exitCode=0 Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.647582 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerDied","Data":"f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe"} Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.647610 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbkl5" event={"ID":"4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda","Type":"ContainerDied","Data":"f0d5b146bc510869d0750443c71776c9a6f85edbe8418e227daac8003662a4a0"} Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.647625 5024 scope.go:117] "RemoveContainer" containerID="f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.647781 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wbkl5" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.689757 5024 scope.go:117] "RemoveContainer" containerID="dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.697352 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wbkl5"] Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.708563 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wbkl5"] Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.721336 5024 scope.go:117] "RemoveContainer" containerID="4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.782715 5024 scope.go:117] "RemoveContainer" containerID="f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe" Nov 28 17:24:25 crc kubenswrapper[5024]: E1128 17:24:25.783321 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe\": container with ID starting with f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe not found: ID does not exist" containerID="f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.783399 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe"} err="failed to get container status \"f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe\": rpc error: code = NotFound desc = could not find container \"f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe\": container with ID starting with f819444ee1344f64c4023c683326591f763652d6dfbe0bf2896b21093215cabe not found: ID does not exist" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.783431 5024 scope.go:117] "RemoveContainer" containerID="dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e" Nov 28 17:24:25 crc kubenswrapper[5024]: E1128 17:24:25.783898 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e\": container with ID starting with dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e not found: ID does not exist" containerID="dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.783943 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e"} err="failed to get container status \"dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e\": rpc error: code = NotFound desc = could not find container \"dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e\": container with ID starting with dbc6fd54ff2ea252838163385d27ea88fbfbda8e78f78f4d72c0834d47bfeb5e not found: ID does not exist" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.783971 5024 scope.go:117] "RemoveContainer" containerID="4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0" Nov 28 17:24:25 crc kubenswrapper[5024]: E1128 17:24:25.784745 5024 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0\": container with ID starting with 4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0 not found: ID does not exist" containerID="4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0" Nov 28 17:24:25 crc kubenswrapper[5024]: I1128 17:24:25.784788 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0"} err="failed to get container status \"4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0\": rpc error: code = NotFound desc = could not find container \"4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0\": container with ID starting with 4818bc47561afbb61a5c58abf047f386a465103b3671c7eaa76f5e80724fded0 not found: ID does not exist" Nov 28 17:24:26 crc kubenswrapper[5024]: I1128 17:24:26.516552 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" path="/var/lib/kubelet/pods/4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda/volumes" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773144 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-phwfj"] Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773661 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="extract-content" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773677 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="extract-content" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773693 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769e3a29-37e1-4aa5-ae9a-c82e3efe8892" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773699 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="769e3a29-37e1-4aa5-ae9a-c82e3efe8892" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773711 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3775a71c-b9bd-4550-b613-113d5eb727d2" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773718 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3775a71c-b9bd-4550-b613-113d5eb727d2" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773733 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="extract-utilities" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773740 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="extract-utilities" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773749 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773755 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773772 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="registry-server" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773779 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="registry-server" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773796 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="registry-server" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773803 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="registry-server" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773819 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca9dafd-8069-42d9-b644-12fc96509330" containerName="heat-engine" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773827 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca9dafd-8069-42d9-b644-12fc96509330" containerName="heat-engine" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773843 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f17bfb5-bf03-441a-ac54-d1e842049a41" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773851 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f17bfb5-bf03-441a-ac54-d1e842049a41" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773873 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c497f27e-01b1-457c-bcf1-dc7652e9f771" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773880 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c497f27e-01b1-457c-bcf1-dc7652e9f771" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773895 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="extract-content" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773903 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="extract-content" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773921 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="extract-utilities" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773927 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="extract-utilities" Nov 28 17:24:27 crc kubenswrapper[5024]: E1128 17:24:27.773939 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c51dd3-21b1-4fdf-a076-64dd49fa10f9" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.773945 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c51dd3-21b1-4fdf-a076-64dd49fa10f9" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774171 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca9dafd-8069-42d9-b644-12fc96509330" containerName="heat-engine" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774195 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed317ff-91e9-4a99-beae-89c81fe8b551" containerName="registry-server" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 
17:24:27.774213 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1ec3a6-daa0-4161-b1ef-23b0fd17ecda" containerName="registry-server" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774221 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c497f27e-01b1-457c-bcf1-dc7652e9f771" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774234 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c51dd3-21b1-4fdf-a076-64dd49fa10f9" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774240 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3775a71c-b9bd-4550-b613-113d5eb727d2" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774254 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f17bfb5-bf03-441a-ac54-d1e842049a41" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774263 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4" containerName="mariadb-database-create" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.774273 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="769e3a29-37e1-4aa5-ae9a-c82e3efe8892" containerName="mariadb-account-create-update" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.775191 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.782092 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.783256 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ljqsn" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.785858 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.796654 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-phwfj"] Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.890130 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vx5\" (UniqueName: \"kubernetes.io/projected/4dd5b297-8471-4749-aa89-a9d163073420-kube-api-access-s4vx5\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.890178 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.890218 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-config-data\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " 
pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.890774 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-scripts\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.992806 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vx5\" (UniqueName: \"kubernetes.io/projected/4dd5b297-8471-4749-aa89-a9d163073420-kube-api-access-s4vx5\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.993324 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.994465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-config-data\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:27 crc kubenswrapper[5024]: I1128 17:24:27.994931 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-scripts\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:28 crc kubenswrapper[5024]: I1128 17:24:28.003992 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:28 crc kubenswrapper[5024]: I1128 17:24:28.006625 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-scripts\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:28 crc kubenswrapper[5024]: I1128 17:24:28.008705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-config-data\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:28 crc kubenswrapper[5024]: I1128 17:24:28.017675 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vx5\" (UniqueName: \"kubernetes.io/projected/4dd5b297-8471-4749-aa89-a9d163073420-kube-api-access-s4vx5\") pod \"nova-cell0-conductor-db-sync-phwfj\" (UID: 
\"4dd5b297-8471-4749-aa89-a9d163073420\") " pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:28 crc kubenswrapper[5024]: I1128 17:24:28.095984 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:28 crc kubenswrapper[5024]: I1128 17:24:28.700618 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-phwfj"] Nov 28 17:24:29 crc kubenswrapper[5024]: I1128 17:24:29.703664 5024 generic.go:334] "Generic (PLEG): container finished" podID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerID="075eda0a4905110118d2d5c317ad91f7289a2bf3b0e58aa9b27513d844ae66d4" exitCode=0 Nov 28 17:24:29 crc kubenswrapper[5024]: I1128 17:24:29.703812 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerDied","Data":"075eda0a4905110118d2d5c317ad91f7289a2bf3b0e58aa9b27513d844ae66d4"} Nov 28 17:24:29 crc kubenswrapper[5024]: I1128 17:24:29.706179 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-phwfj" event={"ID":"4dd5b297-8471-4749-aa89-a9d163073420","Type":"ContainerStarted","Data":"3a87417507c6c3b037b9141346ea58b754abe692d214aad5be41c96c93b1a16d"} Nov 28 17:24:37 crc kubenswrapper[5024]: I1128 17:24:37.564640 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:24:37 crc kubenswrapper[5024]: I1128 17:24:37.565117 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:24:38 crc kubenswrapper[5024]: I1128 17:24:38.833586 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-phwfj" event={"ID":"4dd5b297-8471-4749-aa89-a9d163073420","Type":"ContainerStarted","Data":"31e037aa45fa58b41822ed8b464db589b4b19b31283b87a28e049271ba80d3b1"} Nov 28 17:24:38 crc kubenswrapper[5024]: I1128 17:24:38.856321 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-phwfj" podStartSLOduration=2.1887882850000002 podStartE2EDuration="11.856299579s" podCreationTimestamp="2025-11-28 17:24:27 +0000 UTC" firstStartedPulling="2025-11-28 17:24:28.711190407 +0000 UTC m=+1570.760111312" lastFinishedPulling="2025-11-28 17:24:38.378701701 +0000 UTC m=+1580.427622606" observedRunningTime="2025-11-28 17:24:38.848649409 +0000 UTC m=+1580.897570304" watchObservedRunningTime="2025-11-28 17:24:38.856299579 +0000 UTC m=+1580.905220484" Nov 28 17:24:46 crc kubenswrapper[5024]: I1128 17:24:46.848463 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.224631 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-jdpnl"] Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.226965 5024 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.305145 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jdpnl"] Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.321493 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9381983e-e11c-477e-85b0-df124ae29b32-operator-scripts\") pod \"aodh-db-create-jdpnl\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.322553 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l6w\" (UniqueName: \"kubernetes.io/projected/9381983e-e11c-477e-85b0-df124ae29b32-kube-api-access-z5l6w\") pod \"aodh-db-create-jdpnl\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.347536 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-4beb-account-create-update-dzhlc"] Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.349383 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.351374 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.369152 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4beb-account-create-update-dzhlc"] Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.425508 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c024930-e403-4410-9f11-4c7bc67711cd-operator-scripts\") pod \"aodh-4beb-account-create-update-dzhlc\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.425584 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9381983e-e11c-477e-85b0-df124ae29b32-operator-scripts\") pod \"aodh-db-create-jdpnl\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.425671 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hcv4\" (UniqueName: \"kubernetes.io/projected/0c024930-e403-4410-9f11-4c7bc67711cd-kube-api-access-2hcv4\") pod \"aodh-4beb-account-create-update-dzhlc\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.425722 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5l6w\" (UniqueName: \"kubernetes.io/projected/9381983e-e11c-477e-85b0-df124ae29b32-kube-api-access-z5l6w\") pod \"aodh-db-create-jdpnl\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.426371 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/9381983e-e11c-477e-85b0-df124ae29b32-operator-scripts\") pod \"aodh-db-create-jdpnl\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.454082 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5l6w\" (UniqueName: \"kubernetes.io/projected/9381983e-e11c-477e-85b0-df124ae29b32-kube-api-access-z5l6w\") pod \"aodh-db-create-jdpnl\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.528566 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hcv4\" (UniqueName: \"kubernetes.io/projected/0c024930-e403-4410-9f11-4c7bc67711cd-kube-api-access-2hcv4\") pod \"aodh-4beb-account-create-update-dzhlc\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.528762 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c024930-e403-4410-9f11-4c7bc67711cd-operator-scripts\") pod \"aodh-4beb-account-create-update-dzhlc\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.529760 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c024930-e403-4410-9f11-4c7bc67711cd-operator-scripts\") pod \"aodh-4beb-account-create-update-dzhlc\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.550366 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hcv4\" (UniqueName: \"kubernetes.io/projected/0c024930-e403-4410-9f11-4c7bc67711cd-kube-api-access-2hcv4\") pod \"aodh-4beb-account-create-update-dzhlc\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.559522 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.668285 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.953111 5024 generic.go:334] "Generic (PLEG): container finished" podID="4dd5b297-8471-4749-aa89-a9d163073420" containerID="31e037aa45fa58b41822ed8b464db589b4b19b31283b87a28e049271ba80d3b1" exitCode=0 Nov 28 17:24:49 crc kubenswrapper[5024]: I1128 17:24:49.953398 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-phwfj" event={"ID":"4dd5b297-8471-4749-aa89-a9d163073420","Type":"ContainerDied","Data":"31e037aa45fa58b41822ed8b464db589b4b19b31283b87a28e049271ba80d3b1"} Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.204421 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jdpnl"] Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.348062 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4beb-account-create-update-dzhlc"] Nov 28 17:24:50 crc kubenswrapper[5024]: W1128 17:24:50.348265 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c024930_e403_4410_9f11_4c7bc67711cd.slice/crio-7dff789741730d678761532680f0d72cab625a78a2897da64835ca6d7f6f0e11 WatchSource:0}: Error finding container 7dff789741730d678761532680f0d72cab625a78a2897da64835ca6d7f6f0e11: Status 404 returned error can't find the container with id 7dff789741730d678761532680f0d72cab625a78a2897da64835ca6d7f6f0e11 Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.964724 5024 generic.go:334] "Generic (PLEG): container finished" podID="9381983e-e11c-477e-85b0-df124ae29b32" containerID="a1a93464b8e3bd4e46a87aa13e34a300529171ae545357714f907c837ebd51e4" exitCode=0 Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.964808 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdpnl" event={"ID":"9381983e-e11c-477e-85b0-df124ae29b32","Type":"ContainerDied","Data":"a1a93464b8e3bd4e46a87aa13e34a300529171ae545357714f907c837ebd51e4"} Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.964842 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdpnl" event={"ID":"9381983e-e11c-477e-85b0-df124ae29b32","Type":"ContainerStarted","Data":"b96c1fe0ee5f64c92f6110dc3b3353cf9346473123a2b8637a7b1d3ef465c0c7"} Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.966318 5024 generic.go:334] "Generic (PLEG): container finished" podID="0c024930-e403-4410-9f11-4c7bc67711cd" containerID="dd8bfb9f6a1150e9791594cfedc584c31750a90d4ce2f2bfc8ba3b21b1337d63" exitCode=0 Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.966356 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4beb-account-create-update-dzhlc" event={"ID":"0c024930-e403-4410-9f11-4c7bc67711cd","Type":"ContainerDied","Data":"dd8bfb9f6a1150e9791594cfedc584c31750a90d4ce2f2bfc8ba3b21b1337d63"} Nov 28 17:24:50 crc kubenswrapper[5024]: I1128 17:24:50.966380 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4beb-account-create-update-dzhlc" event={"ID":"0c024930-e403-4410-9f11-4c7bc67711cd","Type":"ContainerStarted","Data":"7dff789741730d678761532680f0d72cab625a78a2897da64835ca6d7f6f0e11"} Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.487169 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.609298 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-scripts\") pod \"4dd5b297-8471-4749-aa89-a9d163073420\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.609622 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-combined-ca-bundle\") pod \"4dd5b297-8471-4749-aa89-a9d163073420\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.609878 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4vx5\" (UniqueName: \"kubernetes.io/projected/4dd5b297-8471-4749-aa89-a9d163073420-kube-api-access-s4vx5\") pod \"4dd5b297-8471-4749-aa89-a9d163073420\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.610003 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-config-data\") pod \"4dd5b297-8471-4749-aa89-a9d163073420\" (UID: \"4dd5b297-8471-4749-aa89-a9d163073420\") " Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.615557 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-scripts" (OuterVolumeSpecName: "scripts") pod "4dd5b297-8471-4749-aa89-a9d163073420" (UID: "4dd5b297-8471-4749-aa89-a9d163073420"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.616105 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd5b297-8471-4749-aa89-a9d163073420-kube-api-access-s4vx5" (OuterVolumeSpecName: "kube-api-access-s4vx5") pod "4dd5b297-8471-4749-aa89-a9d163073420" (UID: "4dd5b297-8471-4749-aa89-a9d163073420"). InnerVolumeSpecName "kube-api-access-s4vx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.647870 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dd5b297-8471-4749-aa89-a9d163073420" (UID: "4dd5b297-8471-4749-aa89-a9d163073420"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.654506 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-config-data" (OuterVolumeSpecName: "config-data") pod "4dd5b297-8471-4749-aa89-a9d163073420" (UID: "4dd5b297-8471-4749-aa89-a9d163073420"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.713752 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4vx5\" (UniqueName: \"kubernetes.io/projected/4dd5b297-8471-4749-aa89-a9d163073420-kube-api-access-s4vx5\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.714274 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.714284 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.714293 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5b297-8471-4749-aa89-a9d163073420-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.979501 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-phwfj" event={"ID":"4dd5b297-8471-4749-aa89-a9d163073420","Type":"ContainerDied","Data":"3a87417507c6c3b037b9141346ea58b754abe692d214aad5be41c96c93b1a16d"} Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.979568 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a87417507c6c3b037b9141346ea58b754abe692d214aad5be41c96c93b1a16d" Nov 28 17:24:51 crc kubenswrapper[5024]: I1128 17:24:51.979783 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-phwfj" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.129664 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 17:24:52 crc kubenswrapper[5024]: E1128 17:24:52.130900 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd5b297-8471-4749-aa89-a9d163073420" containerName="nova-cell0-conductor-db-sync" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.130917 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd5b297-8471-4749-aa89-a9d163073420" containerName="nova-cell0-conductor-db-sync" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.136895 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd5b297-8471-4749-aa89-a9d163073420" containerName="nova-cell0-conductor-db-sync" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.140976 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.148894 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ljqsn" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.154666 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.202254 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.271969 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a9aedd-7e31-4c76-8ca8-65ede667175e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.277556 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a9aedd-7e31-4c76-8ca8-65ede667175e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.277648 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ch6\" (UniqueName: \"kubernetes.io/projected/06a9aedd-7e31-4c76-8ca8-65ede667175e-kube-api-access-f8ch6\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.379975 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a9aedd-7e31-4c76-8ca8-65ede667175e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.380301 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a9aedd-7e31-4c76-8ca8-65ede667175e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.380337 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8ch6\" (UniqueName: \"kubernetes.io/projected/06a9aedd-7e31-4c76-8ca8-65ede667175e-kube-api-access-f8ch6\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.387119 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a9aedd-7e31-4c76-8ca8-65ede667175e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.388497 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a9aedd-7e31-4c76-8ca8-65ede667175e-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.400557 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8ch6\" (UniqueName: \"kubernetes.io/projected/06a9aedd-7e31-4c76-8ca8-65ede667175e-kube-api-access-f8ch6\") pod \"nova-cell0-conductor-0\" (UID: \"06a9aedd-7e31-4c76-8ca8-65ede667175e\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.488577 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.609296 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.689485 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5l6w\" (UniqueName: \"kubernetes.io/projected/9381983e-e11c-477e-85b0-df124ae29b32-kube-api-access-z5l6w\") pod \"9381983e-e11c-477e-85b0-df124ae29b32\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.689748 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9381983e-e11c-477e-85b0-df124ae29b32-operator-scripts\") pod \"9381983e-e11c-477e-85b0-df124ae29b32\" (UID: \"9381983e-e11c-477e-85b0-df124ae29b32\") " Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.690897 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9381983e-e11c-477e-85b0-df124ae29b32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9381983e-e11c-477e-85b0-df124ae29b32" (UID: "9381983e-e11c-477e-85b0-df124ae29b32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.691554 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9381983e-e11c-477e-85b0-df124ae29b32-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.695828 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9381983e-e11c-477e-85b0-df124ae29b32-kube-api-access-z5l6w" (OuterVolumeSpecName: "kube-api-access-z5l6w") pod "9381983e-e11c-477e-85b0-df124ae29b32" (UID: "9381983e-e11c-477e-85b0-df124ae29b32"). InnerVolumeSpecName "kube-api-access-z5l6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.793391 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5l6w\" (UniqueName: \"kubernetes.io/projected/9381983e-e11c-477e-85b0-df124ae29b32-kube-api-access-z5l6w\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.797899 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.894968 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hcv4\" (UniqueName: \"kubernetes.io/projected/0c024930-e403-4410-9f11-4c7bc67711cd-kube-api-access-2hcv4\") pod \"0c024930-e403-4410-9f11-4c7bc67711cd\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.895121 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c024930-e403-4410-9f11-4c7bc67711cd-operator-scripts\") pod \"0c024930-e403-4410-9f11-4c7bc67711cd\" (UID: \"0c024930-e403-4410-9f11-4c7bc67711cd\") " Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.895877 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c024930-e403-4410-9f11-4c7bc67711cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c024930-e403-4410-9f11-4c7bc67711cd" (UID: "0c024930-e403-4410-9f11-4c7bc67711cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.906457 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c024930-e403-4410-9f11-4c7bc67711cd-kube-api-access-2hcv4" (OuterVolumeSpecName: "kube-api-access-2hcv4") pod "0c024930-e403-4410-9f11-4c7bc67711cd" (UID: "0c024930-e403-4410-9f11-4c7bc67711cd"). InnerVolumeSpecName "kube-api-access-2hcv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.990684 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4beb-account-create-update-dzhlc" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.991178 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4beb-account-create-update-dzhlc" event={"ID":"0c024930-e403-4410-9f11-4c7bc67711cd","Type":"ContainerDied","Data":"7dff789741730d678761532680f0d72cab625a78a2897da64835ca6d7f6f0e11"} Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.991216 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dff789741730d678761532680f0d72cab625a78a2897da64835ca6d7f6f0e11" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.993896 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdpnl" event={"ID":"9381983e-e11c-477e-85b0-df124ae29b32","Type":"ContainerDied","Data":"b96c1fe0ee5f64c92f6110dc3b3353cf9346473123a2b8637a7b1d3ef465c0c7"} Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.993924 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-jdpnl" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.993926 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b96c1fe0ee5f64c92f6110dc3b3353cf9346473123a2b8637a7b1d3ef465c0c7" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.997883 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hcv4\" (UniqueName: \"kubernetes.io/projected/0c024930-e403-4410-9f11-4c7bc67711cd-kube-api-access-2hcv4\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:52 crc kubenswrapper[5024]: I1128 17:24:52.997912 5024 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c024930-e403-4410-9f11-4c7bc67711cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:53 crc kubenswrapper[5024]: I1128 17:24:53.055410 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.009501 5024 generic.go:334] "Generic (PLEG): container finished" podID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerID="665af5b5bcc53c6c5bd3b3f56acdf863e1c6ee4ebb549976b673621475e62806" exitCode=137 Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.009556 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerDied","Data":"665af5b5bcc53c6c5bd3b3f56acdf863e1c6ee4ebb549976b673621475e62806"} Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.012371 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"06a9aedd-7e31-4c76-8ca8-65ede667175e","Type":"ContainerStarted","Data":"de596452cfc1fc7d830459308172952d92629901fd66eb6917ebce313d0165d0"} Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.790504 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-llhcr"] Nov 28 17:24:54 crc kubenswrapper[5024]: E1128 17:24:54.795064 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c024930-e403-4410-9f11-4c7bc67711cd" containerName="mariadb-account-create-update" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.795109 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c024930-e403-4410-9f11-4c7bc67711cd" containerName="mariadb-account-create-update" Nov 28 17:24:54 crc kubenswrapper[5024]: E1128 17:24:54.795204 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9381983e-e11c-477e-85b0-df124ae29b32" containerName="mariadb-database-create" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.795214 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9381983e-e11c-477e-85b0-df124ae29b32" containerName="mariadb-database-create" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.795888 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c024930-e403-4410-9f11-4c7bc67711cd" containerName="mariadb-account-create-update" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.795950 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9381983e-e11c-477e-85b0-df124ae29b32" containerName="mariadb-database-create" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.798075 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.801409 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.805538 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.805833 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.805953 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-rjjzq" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.855127 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-llhcr"] Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.959876 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-config-data\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.961082 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-combined-ca-bundle\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.961357 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzq5p\" (UniqueName: \"kubernetes.io/projected/5f864af7-37e3-45ce-ba16-0a139c33831f-kube-api-access-vzq5p\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:54 crc kubenswrapper[5024]: I1128 17:24:54.961579 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-scripts\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.026565 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5db9a796-d716-420d-9c0f-5ec9e4972585","Type":"ContainerDied","Data":"562b7ca379d7799091a777a01364599a073ff3997cd33a431e117941233e755c"} Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.026911 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="562b7ca379d7799091a777a01364599a073ff3997cd33a431e117941233e755c" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.028380 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"06a9aedd-7e31-4c76-8ca8-65ede667175e","Type":"ContainerStarted","Data":"f64f4938f04ad7c0aae3dd08bfca9bf0d1350a496c97701ec4219c7887523f2c"} Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.028694 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.045579 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.04556019 podStartE2EDuration="3.04556019s" podCreationTimestamp="2025-11-28 17:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:24:55.042864882 +0000 UTC m=+1597.091785777" watchObservedRunningTime="2025-11-28 17:24:55.04556019 +0000 UTC m=+1597.094481095" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.065222 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-scripts\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.065307 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-config-data\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.065380 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-combined-ca-bundle\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.065498 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzq5p\" (UniqueName: \"kubernetes.io/projected/5f864af7-37e3-45ce-ba16-0a139c33831f-kube-api-access-vzq5p\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.071760 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-scripts\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.074587 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-config-data\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.089834 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzq5p\" (UniqueName: \"kubernetes.io/projected/5f864af7-37e3-45ce-ba16-0a139c33831f-kube-api-access-vzq5p\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.090678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-combined-ca-bundle\") pod \"aodh-db-sync-llhcr\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.124931 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-llhcr" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.257956 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.374961 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-run-httpd\") pod \"5db9a796-d716-420d-9c0f-5ec9e4972585\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.375425 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-log-httpd\") pod \"5db9a796-d716-420d-9c0f-5ec9e4972585\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.375488 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-sg-core-conf-yaml\") pod \"5db9a796-d716-420d-9c0f-5ec9e4972585\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.375577 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-config-data\") pod \"5db9a796-d716-420d-9c0f-5ec9e4972585\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.375655 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-scripts\") pod \"5db9a796-d716-420d-9c0f-5ec9e4972585\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.375705 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-combined-ca-bundle\") pod \"5db9a796-d716-420d-9c0f-5ec9e4972585\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.375791 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpwh2\" (UniqueName: \"kubernetes.io/projected/5db9a796-d716-420d-9c0f-5ec9e4972585-kube-api-access-qpwh2\") pod \"5db9a796-d716-420d-9c0f-5ec9e4972585\" (UID: \"5db9a796-d716-420d-9c0f-5ec9e4972585\") " Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.376330 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5db9a796-d716-420d-9c0f-5ec9e4972585" (UID: "5db9a796-d716-420d-9c0f-5ec9e4972585"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.376521 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.376976 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5db9a796-d716-420d-9c0f-5ec9e4972585" (UID: "5db9a796-d716-420d-9c0f-5ec9e4972585"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.384246 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-scripts" (OuterVolumeSpecName: "scripts") pod "5db9a796-d716-420d-9c0f-5ec9e4972585" (UID: "5db9a796-d716-420d-9c0f-5ec9e4972585"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.396453 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db9a796-d716-420d-9c0f-5ec9e4972585-kube-api-access-qpwh2" (OuterVolumeSpecName: "kube-api-access-qpwh2") pod "5db9a796-d716-420d-9c0f-5ec9e4972585" (UID: "5db9a796-d716-420d-9c0f-5ec9e4972585"). InnerVolumeSpecName "kube-api-access-qpwh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.420098 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5db9a796-d716-420d-9c0f-5ec9e4972585" (UID: "5db9a796-d716-420d-9c0f-5ec9e4972585"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.476331 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db9a796-d716-420d-9c0f-5ec9e4972585" (UID: "5db9a796-d716-420d-9c0f-5ec9e4972585"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.480782 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.480866 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.480899 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpwh2\" (UniqueName: \"kubernetes.io/projected/5db9a796-d716-420d-9c0f-5ec9e4972585-kube-api-access-qpwh2\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.480908 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5db9a796-d716-420d-9c0f-5ec9e4972585-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.480957 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.566476 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-config-data" (OuterVolumeSpecName: "config-data") pod "5db9a796-d716-420d-9c0f-5ec9e4972585" (UID: "5db9a796-d716-420d-9c0f-5ec9e4972585"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.583665 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db9a796-d716-420d-9c0f-5ec9e4972585-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:24:55 crc kubenswrapper[5024]: I1128 17:24:55.648999 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-llhcr"] Nov 28 17:24:55 crc kubenswrapper[5024]: W1128 17:24:55.653775 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f864af7_37e3_45ce_ba16_0a139c33831f.slice/crio-d98924b75b564f163b4a24186e87f064888ce8024e2c08df994b952e284fd0a3 WatchSource:0}: Error finding container d98924b75b564f163b4a24186e87f064888ce8024e2c08df994b952e284fd0a3: Status 404 returned error can't find the container with id d98924b75b564f163b4a24186e87f064888ce8024e2c08df994b952e284fd0a3 Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.042248 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-llhcr" event={"ID":"5f864af7-37e3-45ce-ba16-0a139c33831f","Type":"ContainerStarted","Data":"d98924b75b564f163b4a24186e87f064888ce8024e2c08df994b952e284fd0a3"} Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.042320 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.096936 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.119684 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.157891 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:56 crc kubenswrapper[5024]: E1128 17:24:56.158839 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-central-agent" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.158863 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-central-agent" Nov 28 17:24:56 crc kubenswrapper[5024]: E1128 17:24:56.158888 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-notification-agent" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.158896 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-notification-agent" Nov 28 17:24:56 crc kubenswrapper[5024]: E1128 17:24:56.158912 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="sg-core" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.158975 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="sg-core" Nov 28 17:24:56 crc kubenswrapper[5024]: E1128 17:24:56.159010 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="proxy-httpd" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.159119 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="proxy-httpd" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.159501 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="proxy-httpd" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.159526 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-central-agent" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.159537 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="ceilometer-notification-agent" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.159558 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" containerName="sg-core" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.162938 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.168050 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.175861 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.217314 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.316546 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-scripts\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.316625 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.316686 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.316821 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-run-httpd\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.316896 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-log-httpd\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.316942 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5hh\" (UniqueName: \"kubernetes.io/projected/ed08872d-895b-4d55-bd98-9b12b32f28f6-kube-api-access-pt5hh\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.316974 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-config-data\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.422138 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5hh\" (UniqueName: \"kubernetes.io/projected/ed08872d-895b-4d55-bd98-9b12b32f28f6-kube-api-access-pt5hh\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: 
I1128 17:24:56.422200 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-config-data\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.422474 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-scripts\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.422549 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.422646 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.422883 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-run-httpd\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.423001 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-log-httpd\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.423554 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-log-httpd\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.425384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-run-httpd\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.437210 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-config-data\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.437341 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.437355 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-scripts\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.439460 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.460914 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5hh\" (UniqueName: \"kubernetes.io/projected/ed08872d-895b-4d55-bd98-9b12b32f28f6-kube-api-access-pt5hh\") pod \"ceilometer-0\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.483680 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:24:56 crc kubenswrapper[5024]: I1128 17:24:56.514141 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db9a796-d716-420d-9c0f-5ec9e4972585" path="/var/lib/kubelet/pods/5db9a796-d716-420d-9c0f-5ec9e4972585/volumes" Nov 28 17:24:57 crc kubenswrapper[5024]: W1128 17:24:57.067613 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded08872d_895b_4d55_bd98_9b12b32f28f6.slice/crio-06ca606169ed1dea03b7fb55e71a1fc2f6760ae9381205c659230490cbfa699d WatchSource:0}: Error finding container 06ca606169ed1dea03b7fb55e71a1fc2f6760ae9381205c659230490cbfa699d: Status 404 returned error can't find the container with id 06ca606169ed1dea03b7fb55e71a1fc2f6760ae9381205c659230490cbfa699d Nov 28 17:24:57 crc kubenswrapper[5024]: I1128 17:24:57.082729 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:24:58 crc kubenswrapper[5024]: I1128 17:24:58.072848 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerStarted","Data":"06ca606169ed1dea03b7fb55e71a1fc2f6760ae9381205c659230490cbfa699d"} Nov 28 17:25:01 crc kubenswrapper[5024]: I1128 17:25:01.105958 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerStarted","Data":"f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15"} Nov 28 17:25:02 crc kubenswrapper[5024]: I1128 17:25:02.126783 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-llhcr" event={"ID":"5f864af7-37e3-45ce-ba16-0a139c33831f","Type":"ContainerStarted","Data":"bbd3485a5e04b6e3609d8e627b803d0d59f21578919c16a37761ade5a617ed17"} Nov 28 17:25:02 crc kubenswrapper[5024]: I1128 17:25:02.130445 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerStarted","Data":"ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775"} Nov 28 17:25:02 crc kubenswrapper[5024]: I1128 17:25:02.156740 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-llhcr" podStartSLOduration=2.924395266 podStartE2EDuration="8.156675065s" podCreationTimestamp="2025-11-28 17:24:54 +0000 UTC" 
firstStartedPulling="2025-11-28 17:24:55.657067844 +0000 UTC m=+1597.705988749" lastFinishedPulling="2025-11-28 17:25:00.889347643 +0000 UTC m=+1602.938268548" observedRunningTime="2025-11-28 17:25:02.146352589 +0000 UTC m=+1604.195273494" watchObservedRunningTime="2025-11-28 17:25:02.156675065 +0000 UTC m=+1604.205595970" Nov 28 17:25:02 crc kubenswrapper[5024]: I1128 17:25:02.526823 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.180146 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-b77gw"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.184067 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.192320 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.192724 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.221988 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b77gw"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.298878 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-config-data\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.299042 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.299150 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-scripts\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.299479 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xflkn\" (UniqueName: \"kubernetes.io/projected/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-kube-api-access-xflkn\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.415451 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.415569 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-scripts\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.415718 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xflkn\" (UniqueName: \"kubernetes.io/projected/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-kube-api-access-xflkn\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.415830 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-config-data\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.429068 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.446731 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-config-data\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.453215 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-scripts\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.496610 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xflkn\" (UniqueName: \"kubernetes.io/projected/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-kube-api-access-xflkn\") pod \"nova-cell0-cell-mapping-b77gw\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.624452 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.627657 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.640566 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.644831 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.699460 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.715423 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.720421 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.745322 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-config-data\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.745392 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzqq\" (UniqueName: \"kubernetes.io/projected/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-kube-api-access-4xzqq\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.745449 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-logs\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.745563 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.782554 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.790155 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.802726 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.833961 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848190 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-config-data\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848281 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848356 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83db18c-461d-4602-a0e8-3f6506e931b4-logs\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848447 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbrjq\" (UniqueName: \"kubernetes.io/projected/d83db18c-461d-4602-a0e8-3f6506e931b4-kube-api-access-vbrjq\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848530 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-config-data\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848551 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xzqq\" (UniqueName: \"kubernetes.io/projected/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-kube-api-access-4xzqq\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848611 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-logs\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.848701 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.855071 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-logs\") pod 
\"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.860449 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.860955 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.874214 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-config-data\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.881679 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.912062 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xzqq\" (UniqueName: \"kubernetes.io/projected/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-kube-api-access-4xzqq\") pod \"nova-metadata-0\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") " pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.950722 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbrjq\" (UniqueName: \"kubernetes.io/projected/d83db18c-461d-4602-a0e8-3f6506e931b4-kube-api-access-vbrjq\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.957491 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.957638 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.957782 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpgpg\" (UniqueName: \"kubernetes.io/projected/813fefa2-4c39-465a-bf6a-5b2517cd1101-kube-api-access-cpgpg\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.958342 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-config-data\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.958526 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.958727 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83db18c-461d-4602-a0e8-3f6506e931b4-logs\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.959626 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83db18c-461d-4602-a0e8-3f6506e931b4-logs\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.977505 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:25:03 crc kubenswrapper[5024]: I1128 17:25:03.998347 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-config-data\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.009150 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.073490 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.073549 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.073586 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpgpg\" (UniqueName: \"kubernetes.io/projected/813fefa2-4c39-465a-bf6a-5b2517cd1101-kube-api-access-cpgpg\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.075908 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbrjq\" (UniqueName: \"kubernetes.io/projected/d83db18c-461d-4602-a0e8-3f6506e931b4-kube-api-access-vbrjq\") pod \"nova-api-0\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " pod="openstack/nova-api-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.083131 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.099821 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.102273 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.111096 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7kctn"] Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.113275 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.135007 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpgpg\" (UniqueName: \"kubernetes.io/projected/813fefa2-4c39-465a-bf6a-5b2517cd1101-kube-api-access-cpgpg\") pod \"nova-cell1-novncproxy-0\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.145672 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.163973 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.166282 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.170208 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.230426 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7kctn"] Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.268608 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336449 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336557 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krp2\" (UniqueName: \"kubernetes.io/projected/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-kube-api-access-5krp2\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336655 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336710 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84l4\" (UniqueName: \"kubernetes.io/projected/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-kube-api-access-v84l4\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336780 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336807 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336829 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-config\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.336891 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.421325 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerStarted","Data":"66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7"} Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439163 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439512 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84l4\" (UniqueName: \"kubernetes.io/projected/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-kube-api-access-v84l4\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439648 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439683 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439703 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439748 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-config\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439775 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn" 
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.439916 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.440049 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5krp2\" (UniqueName: \"kubernetes.io/projected/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-kube-api-access-5krp2\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.440076 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.440603 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.440958 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.442111 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.465107 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.475341 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-config\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.493272 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-config-data\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.499667 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84l4\" (UniqueName: \"kubernetes.io/projected/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-kube-api-access-v84l4\") pod \"nova-scheduler-0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.500462 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krp2\" (UniqueName: \"kubernetes.io/projected/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-kube-api-access-5krp2\") pod \"dnsmasq-dns-9b86998b5-7kctn\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.512368 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7kctn"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.537873 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 17:25:04 crc kubenswrapper[5024]: I1128 17:25:04.944343 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b77gw"]
Nov 28 17:25:04 crc kubenswrapper[5024]: W1128 17:25:04.955748 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode179cfb9_4f0a_4b45_9c10_ad14432d7fc4.slice/crio-f4cb110daa5792e146860b2bc045698f9fc8e62c8f72c9f6121320e03edfd41d WatchSource:0}: Error finding container f4cb110daa5792e146860b2bc045698f9fc8e62c8f72c9f6121320e03edfd41d: Status 404 returned error can't find the container with id f4cb110daa5792e146860b2bc045698f9fc8e62c8f72c9f6121320e03edfd41d
Nov 28 17:25:05 crc kubenswrapper[5024]: I1128 17:25:05.442476 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b77gw" event={"ID":"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4","Type":"ContainerStarted","Data":"e69794901f3ecc2a3703449de30328242032fbe02e17b29237a656216d3fd946"}
Nov 28 17:25:05 crc kubenswrapper[5024]: I1128 17:25:05.442795 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b77gw" event={"ID":"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4","Type":"ContainerStarted","Data":"f4cb110daa5792e146860b2bc045698f9fc8e62c8f72c9f6121320e03edfd41d"}
Nov 28 17:25:05 crc kubenswrapper[5024]: I1128 17:25:05.476167 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-b77gw" podStartSLOduration=2.476141711 podStartE2EDuration="2.476141711s" podCreationTimestamp="2025-11-28 17:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:05.469173911 +0000 UTC m=+1607.518094816" watchObservedRunningTime="2025-11-28 17:25:05.476141711 +0000 UTC m=+1607.525062616"
Nov 28 17:25:05 crc kubenswrapper[5024]: I1128 17:25:05.768207 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 28 17:25:05 crc kubenswrapper[5024]: I1128 17:25:05.798592 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 28 17:25:05 crc kubenswrapper[5024]: I1128 17:25:05.823523 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.129347 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.213254 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7kctn"]
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.345178 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7k6k"]
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.346933 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.349634 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.349958 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.362802 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7k6k"]
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.529248 5024 generic.go:334] "Generic (PLEG): container finished" podID="5f864af7-37e3-45ce-ba16-0a139c33831f" containerID="bbd3485a5e04b6e3609d8e627b803d0d59f21578919c16a37761ade5a617ed17" exitCode=0
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.560849 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-config-data\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.561247 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-scripts\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.561491 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whrqr\" (UniqueName: \"kubernetes.io/projected/41c514f3-354d-4254-aea0-821b23140252-kube-api-access-whrqr\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.561657 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.562857 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0","Type":"ContainerStarted","Data":"da6540dab0dbd906acd43405039413c39030ba7ac8fe65947916145aee91323b"}
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.562938 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-llhcr" event={"ID":"5f864af7-37e3-45ce-ba16-0a139c33831f","Type":"ContainerDied","Data":"bbd3485a5e04b6e3609d8e627b803d0d59f21578919c16a37761ade5a617ed17"}
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.562959 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f","Type":"ContainerStarted","Data":"d9cc2a71cbe5f36048bec477cf464eb397fbef89451a52576d66ebb9449b14db"}
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.562974 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" event={"ID":"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e","Type":"ContainerStarted","Data":"eded1712a97a186a9876a31da090c87c2f54191b6240668acd60003e71ed0861"}
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.563001 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83db18c-461d-4602-a0e8-3f6506e931b4","Type":"ContainerStarted","Data":"be18fe70e3570ac7bdc98b54c5440027d86055ff183a2e58ef43a6525c212833"}
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.563031 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"813fefa2-4c39-465a-bf6a-5b2517cd1101","Type":"ContainerStarted","Data":"d16f0481a5db854726f21dbbbc03bd6bf3d4c0907238d0588ee55e9ee4ecd6ee"}
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.664480 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-config-data\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.665196 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-scripts\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.665500 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whrqr\" (UniqueName: \"kubernetes.io/projected/41c514f3-354d-4254-aea0-821b23140252-kube-api-access-whrqr\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.665720 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.678801 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-scripts\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.679455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-config-data\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.680169 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.686628 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whrqr\" (UniqueName: \"kubernetes.io/projected/41c514f3-354d-4254-aea0-821b23140252-kube-api-access-whrqr\") pod \"nova-cell1-conductor-db-sync-j7k6k\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:06 crc kubenswrapper[5024]: I1128 17:25:06.747910 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7k6k"
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.548484 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7k6k"]
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.565179 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.565232 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.565289 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf"
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.569125 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.569219 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" gracePeriod=600
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.588097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7k6k" event={"ID":"41c514f3-354d-4254-aea0-821b23140252","Type":"ContainerStarted","Data":"bb760cdbdb8cea3cf3fcceeb0dec862fa33d523bb09db8cc12737e6286f62b68"}
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.590656 5024 generic.go:334] "Generic (PLEG): container finished" podID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerID="c909157eb0fc792216efc46466096aee63e8aa5f7798c1d7d43588b0de9d93f2" exitCode=0
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.590903 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" event={"ID":"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e","Type":"ContainerDied","Data":"c909157eb0fc792216efc46466096aee63e8aa5f7798c1d7d43588b0de9d93f2"}
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.606850 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerStarted","Data":"eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e"}
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.607966 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 17:25:07 crc kubenswrapper[5024]: I1128 17:25:07.682852 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.663751285 podStartE2EDuration="11.682833064s" podCreationTimestamp="2025-11-28 17:24:56 +0000 UTC" firstStartedPulling="2025-11-28 17:24:57.083197447 +0000 UTC m=+1599.132118352" lastFinishedPulling="2025-11-28 17:25:06.102279226 +0000 UTC m=+1608.151200131" observedRunningTime="2025-11-28 17:25:07.670306465 +0000 UTC m=+1609.719227370" watchObservedRunningTime="2025-11-28 17:25:07.682833064 +0000 UTC m=+1609.731753969"
Nov 28 17:25:07 crc kubenswrapper[5024]: E1128 17:25:07.731325 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.103293 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.109408 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.338691 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-llhcr" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.439729 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-scripts\") pod \"5f864af7-37e3-45ce-ba16-0a139c33831f\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.439871 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-combined-ca-bundle\") pod \"5f864af7-37e3-45ce-ba16-0a139c33831f\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.440047 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-config-data\") pod \"5f864af7-37e3-45ce-ba16-0a139c33831f\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.440177 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzq5p\" (UniqueName: \"kubernetes.io/projected/5f864af7-37e3-45ce-ba16-0a139c33831f-kube-api-access-vzq5p\") pod \"5f864af7-37e3-45ce-ba16-0a139c33831f\" (UID: \"5f864af7-37e3-45ce-ba16-0a139c33831f\") " Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.449851 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f864af7-37e3-45ce-ba16-0a139c33831f-kube-api-access-vzq5p" (OuterVolumeSpecName: "kube-api-access-vzq5p") pod "5f864af7-37e3-45ce-ba16-0a139c33831f" (UID: "5f864af7-37e3-45ce-ba16-0a139c33831f"). InnerVolumeSpecName "kube-api-access-vzq5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.452152 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-scripts" (OuterVolumeSpecName: "scripts") pod "5f864af7-37e3-45ce-ba16-0a139c33831f" (UID: "5f864af7-37e3-45ce-ba16-0a139c33831f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.487571 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f864af7-37e3-45ce-ba16-0a139c33831f" (UID: "5f864af7-37e3-45ce-ba16-0a139c33831f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.542726 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.542770 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.542779 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzq5p\" (UniqueName: \"kubernetes.io/projected/5f864af7-37e3-45ce-ba16-0a139c33831f-kube-api-access-vzq5p\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.584400 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-config-data" (OuterVolumeSpecName: "config-data") pod "5f864af7-37e3-45ce-ba16-0a139c33831f" (UID: "5f864af7-37e3-45ce-ba16-0a139c33831f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.657453 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f864af7-37e3-45ce-ba16-0a139c33831f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.658848 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-llhcr" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.659127 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-llhcr" event={"ID":"5f864af7-37e3-45ce-ba16-0a139c33831f","Type":"ContainerDied","Data":"d98924b75b564f163b4a24186e87f064888ce8024e2c08df994b952e284fd0a3"} Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.659223 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d98924b75b564f163b4a24186e87f064888ce8024e2c08df994b952e284fd0a3" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.684210 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7k6k" event={"ID":"41c514f3-354d-4254-aea0-821b23140252","Type":"ContainerStarted","Data":"705d6097270b28f54efe431dc19d441a16e92988d96d63d9b1a3847adc062c0a"} Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.711449 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" event={"ID":"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e","Type":"ContainerStarted","Data":"aa7ec8dac83a7805f245b77a7995cd4d89452a3fd3b858fed5f1c591450dd90c"} Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.713045 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.715981 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-j7k6k" podStartSLOduration=2.715935509 podStartE2EDuration="2.715935509s" podCreationTimestamp="2025-11-28 17:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:08.709072862 +0000 UTC m=+1610.757993767" 
watchObservedRunningTime="2025-11-28 17:25:08.715935509 +0000 UTC m=+1610.764856414" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.726252 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" exitCode=0 Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.727531 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b"} Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.727571 5024 scope.go:117] "RemoveContainer" containerID="c14bd832feb4db8425d0f1a45e06a6d0b13d8ee68a565113d9375a7e774e72b0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.728046 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:25:10 crc kubenswrapper[5024]: E1128 17:25:08.728337 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:08.748304 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" podStartSLOduration=5.748283088 podStartE2EDuration="5.748283088s" podCreationTimestamp="2025-11-28 17:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:08.740503025 +0000 UTC m=+1610.789423930" watchObservedRunningTime="2025-11-28 17:25:08.748283088 +0000 UTC m=+1610.797203993" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.468229 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 17:25:10 crc kubenswrapper[5024]: E1128 17:25:09.470117 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f864af7-37e3-45ce-ba16-0a139c33831f" containerName="aodh-db-sync" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.470137 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f864af7-37e3-45ce-ba16-0a139c33831f" containerName="aodh-db-sync" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.470433 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f864af7-37e3-45ce-ba16-0a139c33831f" containerName="aodh-db-sync" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.474988 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.486538 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.486761 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-rjjzq" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.487867 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.498470 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.612960 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-scripts\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.613118 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-config-data\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.613149 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.613201 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28c2g\" (UniqueName: \"kubernetes.io/projected/8e273701-bfd9-47a7-801f-79587c45b401-kube-api-access-28c2g\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.715778 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-scripts\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.715916 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-config-data\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.715959 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.715994 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28c2g\" (UniqueName: \"kubernetes.io/projected/8e273701-bfd9-47a7-801f-79587c45b401-kube-api-access-28c2g\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: 
I1128 17:25:09.809747 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-scripts\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.810607 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-config-data\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.811213 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.813126 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28c2g\" (UniqueName: \"kubernetes.io/projected/8e273701-bfd9-47a7-801f-79587c45b401-kube-api-access-28c2g\") pod \"aodh-0\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " pod="openstack/aodh-0" Nov 28 17:25:10 crc kubenswrapper[5024]: I1128 17:25:09.834050 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.010732 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.681011 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.687501 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-central-agent" containerID="cri-o://f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15" gracePeriod=30 Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.688077 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="proxy-httpd" containerID="cri-o://eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e" gracePeriod=30 Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.688159 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="sg-core" containerID="cri-o://66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7" gracePeriod=30 Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.688229 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-notification-agent" containerID="cri-o://ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775" gracePeriod=30 Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.921196 5024 generic.go:334] "Generic (PLEG): container finished" podID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerID="66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7" exitCode=2 Nov 28 17:25:13 crc kubenswrapper[5024]: I1128 17:25:13.921266 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerDied","Data":"66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7"} Nov 28 17:25:14 crc kubenswrapper[5024]: I1128 17:25:14.116543 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 17:25:14 crc kubenswrapper[5024]: I1128 17:25:14.547515 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:14 crc kubenswrapper[5024]: I1128 17:25:14.682915 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-84gfm"] Nov 28 17:25:14 crc kubenswrapper[5024]: I1128 17:25:14.683438 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" podUID="8494f18f-160a-41ae-802d-a490037f0aec" containerName="dnsmasq-dns" containerID="cri-o://b5051948f72205fd2db1b4694a7407db7fdbc5f6d2f4c0dbd1899cd0ebad0e8c" gracePeriod=10 Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.027372 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f","Type":"ContainerStarted","Data":"0c93f45552828c1a71680441d5d5cceccba04134856cb3be3590076385b9ebf5"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.027697 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f","Type":"ContainerStarted","Data":"a8654b9fba746666255220cfce1c94ebfa3b5ebf7524e0146a08b4aeeb8261ff"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.027851 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-log" containerID="cri-o://a8654b9fba746666255220cfce1c94ebfa3b5ebf7524e0146a08b4aeeb8261ff" gracePeriod=30 Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.028139 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-metadata" containerID="cri-o://0c93f45552828c1a71680441d5d5cceccba04134856cb3be3590076385b9ebf5" gracePeriod=30 Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.066758 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83db18c-461d-4602-a0e8-3f6506e931b4","Type":"ContainerStarted","Data":"1f6bb472ddee69d36d1d1c7728cdc50a464f4b085a09e4f47e4abdc92859dd5d"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.066809 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83db18c-461d-4602-a0e8-3f6506e931b4","Type":"ContainerStarted","Data":"e5e764fbc5dd3faa51f5a706dca0f8fd67189b0404f57c75fcad2e5de253e3b7"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.068484 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerStarted","Data":"fc82a364a8609e0fc95e9bed35e6ec70c1af83bec6a9d797fd398eaaeeac3848"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.096706 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"813fefa2-4c39-465a-bf6a-5b2517cd1101","Type":"ContainerStarted","Data":"e53c2f40c52f3e1a783029846d8a1f534a416fb19302e8176450f34ad4d8e1c1"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 
17:25:15.096853 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="813fefa2-4c39-465a-bf6a-5b2517cd1101" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e53c2f40c52f3e1a783029846d8a1f534a416fb19302e8176450f34ad4d8e1c1" gracePeriod=30 Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.100382 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.419347798 podStartE2EDuration="12.100362311s" podCreationTimestamp="2025-11-28 17:25:03 +0000 UTC" firstStartedPulling="2025-11-28 17:25:05.827854364 +0000 UTC m=+1607.876775269" lastFinishedPulling="2025-11-28 17:25:13.508868877 +0000 UTC m=+1615.557789782" observedRunningTime="2025-11-28 17:25:15.0954575 +0000 UTC m=+1617.144378405" watchObservedRunningTime="2025-11-28 17:25:15.100362311 +0000 UTC m=+1617.149283216" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.159480 5024 generic.go:334] "Generic (PLEG): container finished" podID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerID="eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e" exitCode=0 Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.159517 5024 generic.go:334] "Generic (PLEG): container finished" podID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerID="ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775" exitCode=0 Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.159686 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerDied","Data":"eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.159715 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerDied","Data":"ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.162140 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.425203186 podStartE2EDuration="12.162119985s" podCreationTimestamp="2025-11-28 17:25:03 +0000 UTC" firstStartedPulling="2025-11-28 17:25:05.772983308 +0000 UTC m=+1607.821904213" lastFinishedPulling="2025-11-28 17:25:13.509900107 +0000 UTC m=+1615.558821012" observedRunningTime="2025-11-28 17:25:15.159984434 +0000 UTC m=+1617.208905339" watchObservedRunningTime="2025-11-28 17:25:15.162119985 +0000 UTC m=+1617.211040890" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.172633 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0","Type":"ContainerStarted","Data":"2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0"} Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.177963 5024 generic.go:334] "Generic (PLEG): container finished" podID="8494f18f-160a-41ae-802d-a490037f0aec" containerID="b5051948f72205fd2db1b4694a7407db7fdbc5f6d2f4c0dbd1899cd0ebad0e8c" exitCode=0 Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.178011 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" event={"ID":"8494f18f-160a-41ae-802d-a490037f0aec","Type":"ContainerDied","Data":"b5051948f72205fd2db1b4694a7407db7fdbc5f6d2f4c0dbd1899cd0ebad0e8c"} Nov 28 17:25:15 crc 
kubenswrapper[5024]: I1128 17:25:15.260886 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.534194595 podStartE2EDuration="12.260862301s" podCreationTimestamp="2025-11-28 17:25:03 +0000 UTC" firstStartedPulling="2025-11-28 17:25:05.783212521 +0000 UTC m=+1607.832133426" lastFinishedPulling="2025-11-28 17:25:13.509880227 +0000 UTC m=+1615.558801132" observedRunningTime="2025-11-28 17:25:15.185531937 +0000 UTC m=+1617.234452842" watchObservedRunningTime="2025-11-28 17:25:15.260862301 +0000 UTC m=+1617.309783206" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.310048 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.991476141 podStartE2EDuration="12.310012823s" podCreationTimestamp="2025-11-28 17:25:03 +0000 UTC" firstStartedPulling="2025-11-28 17:25:06.190330985 +0000 UTC m=+1608.239251880" lastFinishedPulling="2025-11-28 17:25:13.508867657 +0000 UTC m=+1615.557788562" observedRunningTime="2025-11-28 17:25:15.226641988 +0000 UTC m=+1617.275562893" watchObservedRunningTime="2025-11-28 17:25:15.310012823 +0000 UTC m=+1617.358933728" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.643763 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.708594 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hdlq\" (UniqueName: \"kubernetes.io/projected/8494f18f-160a-41ae-802d-a490037f0aec-kube-api-access-5hdlq\") pod \"8494f18f-160a-41ae-802d-a490037f0aec\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.708755 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-config\") pod \"8494f18f-160a-41ae-802d-a490037f0aec\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.708870 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-nb\") pod \"8494f18f-160a-41ae-802d-a490037f0aec\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.708902 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-sb\") pod \"8494f18f-160a-41ae-802d-a490037f0aec\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.708928 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-swift-storage-0\") pod \"8494f18f-160a-41ae-802d-a490037f0aec\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.709001 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-svc\") pod \"8494f18f-160a-41ae-802d-a490037f0aec\" (UID: \"8494f18f-160a-41ae-802d-a490037f0aec\") " Nov 28 17:25:15 crc 
kubenswrapper[5024]: I1128 17:25:15.714860 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8494f18f-160a-41ae-802d-a490037f0aec-kube-api-access-5hdlq" (OuterVolumeSpecName: "kube-api-access-5hdlq") pod "8494f18f-160a-41ae-802d-a490037f0aec" (UID: "8494f18f-160a-41ae-802d-a490037f0aec"). InnerVolumeSpecName "kube-api-access-5hdlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.791487 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8494f18f-160a-41ae-802d-a490037f0aec" (UID: "8494f18f-160a-41ae-802d-a490037f0aec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.801521 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8494f18f-160a-41ae-802d-a490037f0aec" (UID: "8494f18f-160a-41ae-802d-a490037f0aec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.802738 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-config" (OuterVolumeSpecName: "config") pod "8494f18f-160a-41ae-802d-a490037f0aec" (UID: "8494f18f-160a-41ae-802d-a490037f0aec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.804820 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8494f18f-160a-41ae-802d-a490037f0aec" (UID: "8494f18f-160a-41ae-802d-a490037f0aec"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.811831 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.811865 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hdlq\" (UniqueName: \"kubernetes.io/projected/8494f18f-160a-41ae-802d-a490037f0aec-kube-api-access-5hdlq\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.811875 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.811883 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.811893 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.836670 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8494f18f-160a-41ae-802d-a490037f0aec" (UID: "8494f18f-160a-41ae-802d-a490037f0aec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:15 crc kubenswrapper[5024]: I1128 17:25:15.913934 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8494f18f-160a-41ae-802d-a490037f0aec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.193822 5024 generic.go:334] "Generic (PLEG): container finished" podID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerID="a8654b9fba746666255220cfce1c94ebfa3b5ebf7524e0146a08b4aeeb8261ff" exitCode=143 Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.194008 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f","Type":"ContainerDied","Data":"a8654b9fba746666255220cfce1c94ebfa3b5ebf7524e0146a08b4aeeb8261ff"} Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.197510 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerStarted","Data":"d0ce6d04ef261f7ab68ab98250d111df1f840840ecdfedb94ac5b910bd19a99f"} Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.201750 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.201797 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-84gfm" event={"ID":"8494f18f-160a-41ae-802d-a490037f0aec","Type":"ContainerDied","Data":"adbeadb7906ab8d9e462ca8df638ed977cb206a1fb1ae73a2c9be6bdb8293a57"} Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.201827 5024 scope.go:117] "RemoveContainer" containerID="b5051948f72205fd2db1b4694a7407db7fdbc5f6d2f4c0dbd1899cd0ebad0e8c" Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.320340 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-84gfm"] Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.322183 5024 scope.go:117] "RemoveContainer" containerID="18b98c970ad1d279ab9432575442e5c4992bde3230386311bda136dc0b571473" Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.336288 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-84gfm"] Nov 28 17:25:16 crc kubenswrapper[5024]: I1128 17:25:16.565214 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8494f18f-160a-41ae-802d-a490037f0aec" path="/var/lib/kubelet/pods/8494f18f-160a-41ae-802d-a490037f0aec/volumes" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.157601 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.197884 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-sg-core-conf-yaml\") pod \"ed08872d-895b-4d55-bd98-9b12b32f28f6\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.197957 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-combined-ca-bundle\") pod \"ed08872d-895b-4d55-bd98-9b12b32f28f6\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.197985 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-config-data\") pod \"ed08872d-895b-4d55-bd98-9b12b32f28f6\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.198085 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-log-httpd\") pod \"ed08872d-895b-4d55-bd98-9b12b32f28f6\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.198196 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-scripts\") pod \"ed08872d-895b-4d55-bd98-9b12b32f28f6\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.198250 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-run-httpd\") pod \"ed08872d-895b-4d55-bd98-9b12b32f28f6\" (UID: 
\"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.198326 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt5hh\" (UniqueName: \"kubernetes.io/projected/ed08872d-895b-4d55-bd98-9b12b32f28f6-kube-api-access-pt5hh\") pod \"ed08872d-895b-4d55-bd98-9b12b32f28f6\" (UID: \"ed08872d-895b-4d55-bd98-9b12b32f28f6\") " Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.205781 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed08872d-895b-4d55-bd98-9b12b32f28f6" (UID: "ed08872d-895b-4d55-bd98-9b12b32f28f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.205798 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed08872d-895b-4d55-bd98-9b12b32f28f6" (UID: "ed08872d-895b-4d55-bd98-9b12b32f28f6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.216908 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-scripts" (OuterVolumeSpecName: "scripts") pod "ed08872d-895b-4d55-bd98-9b12b32f28f6" (UID: "ed08872d-895b-4d55-bd98-9b12b32f28f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.222198 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed08872d-895b-4d55-bd98-9b12b32f28f6-kube-api-access-pt5hh" (OuterVolumeSpecName: "kube-api-access-pt5hh") pod "ed08872d-895b-4d55-bd98-9b12b32f28f6" (UID: "ed08872d-895b-4d55-bd98-9b12b32f28f6"). InnerVolumeSpecName "kube-api-access-pt5hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.240793 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.241178 5024 generic.go:334] "Generic (PLEG): container finished" podID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerID="f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15" exitCode=0 Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.241218 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerDied","Data":"f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15"} Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.241394 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed08872d-895b-4d55-bd98-9b12b32f28f6","Type":"ContainerDied","Data":"06ca606169ed1dea03b7fb55e71a1fc2f6760ae9381205c659230490cbfa699d"} Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.241445 5024 scope.go:117] "RemoveContainer" containerID="eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.267915 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed08872d-895b-4d55-bd98-9b12b32f28f6" (UID: "ed08872d-895b-4d55-bd98-9b12b32f28f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.301425 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt5hh\" (UniqueName: \"kubernetes.io/projected/ed08872d-895b-4d55-bd98-9b12b32f28f6-kube-api-access-pt5hh\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.301469 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.301485 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.301501 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.301513 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed08872d-895b-4d55-bd98-9b12b32f28f6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.318498 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed08872d-895b-4d55-bd98-9b12b32f28f6" (UID: "ed08872d-895b-4d55-bd98-9b12b32f28f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.343449 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-config-data" (OuterVolumeSpecName: "config-data") pod "ed08872d-895b-4d55-bd98-9b12b32f28f6" (UID: "ed08872d-895b-4d55-bd98-9b12b32f28f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.402766 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.402800 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed08872d-895b-4d55-bd98-9b12b32f28f6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.404380 5024 scope.go:117] "RemoveContainer" containerID="66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.427490 5024 scope.go:117] "RemoveContainer" containerID="ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.453855 5024 scope.go:117] "RemoveContainer" containerID="f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.482141 5024 scope.go:117] "RemoveContainer" containerID="eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.482622 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e\": container with ID starting with eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e not found: ID does not exist" containerID="eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.482662 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e"} err="failed to get container status \"eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e\": rpc error: code = NotFound desc = could not find container \"eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e\": container with ID starting with eb777a277126b502fbf716851015023f6f9f18450f2140d4020a47510596228e not found: ID does not exist" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.482691 5024 scope.go:117] "RemoveContainer" containerID="66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.483209 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7\": container with ID starting with 66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7 not found: ID does not exist" containerID="66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.483241 5024 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7"} err="failed to get container status \"66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7\": rpc error: code = NotFound desc = could not find container \"66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7\": container with ID starting with 66157e9b720c6adc08d61f04523724c88b516e6cfd4d2e86f15195c7f6fb49b7 not found: ID does not exist" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.483270 5024 scope.go:117] "RemoveContainer" containerID="ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.483603 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775\": container with ID starting with ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775 not found: ID does not exist" containerID="ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.483625 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775"} err="failed to get container status \"ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775\": rpc error: code = NotFound desc = could not find container \"ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775\": container with ID starting with ecf8122d5c400eb4351dc46db74fc1e2b69b6d56f346e6c155811f8f3c9fc775 not found: ID does not exist" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.483638 5024 scope.go:117] "RemoveContainer" containerID="f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.483899 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15\": container with ID starting with f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15 not found: ID does not exist" containerID="f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.483918 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15"} err="failed to get container status \"f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15\": rpc error: code = NotFound desc = could not find container \"f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15\": container with ID starting with f1beefeec3de591612d593e2b9ec24ebd94db7e1f1f9bc77381259d9c24afe15 not found: ID does not exist" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.645865 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.663620 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.761832 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.762468 5024 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="proxy-httpd" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.762491 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="proxy-httpd" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.762512 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="sg-core" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.762518 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="sg-core" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.762543 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-notification-agent" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.762550 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-notification-agent" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.762573 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8494f18f-160a-41ae-802d-a490037f0aec" containerName="init" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.762581 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8494f18f-160a-41ae-802d-a490037f0aec" containerName="init" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.762596 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-central-agent" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.762604 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-central-agent" Nov 28 17:25:17 crc kubenswrapper[5024]: E1128 17:25:17.762626 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8494f18f-160a-41ae-802d-a490037f0aec" containerName="dnsmasq-dns" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.762632 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8494f18f-160a-41ae-802d-a490037f0aec" containerName="dnsmasq-dns" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.762995 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-central-agent" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.763048 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="ceilometer-notification-agent" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.763061 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8494f18f-160a-41ae-802d-a490037f0aec" containerName="dnsmasq-dns" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.763087 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="sg-core" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.763134 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" containerName="proxy-httpd" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.765593 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.768544 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.768940 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.797077 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.823572 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.823669 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nl6r\" (UniqueName: \"kubernetes.io/projected/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-kube-api-access-6nl6r\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.823734 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.823829 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.823848 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.823897 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-config-data\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.823994 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-scripts\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.926305 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 
17:25:17.926654 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nl6r\" (UniqueName: \"kubernetes.io/projected/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-kube-api-access-6nl6r\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.926708 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.926785 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.926805 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.926863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-config-data\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.926940 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-scripts\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.927155 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.927257 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.933606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.941506 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.941934 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-config-data\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.942760 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-scripts\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:17 crc kubenswrapper[5024]: I1128 17:25:17.947857 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nl6r\" (UniqueName: \"kubernetes.io/projected/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-kube-api-access-6nl6r\") pod \"ceilometer-0\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " pod="openstack/ceilometer-0" Nov 28 17:25:18 crc kubenswrapper[5024]: I1128 17:25:18.186526 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:18 crc kubenswrapper[5024]: I1128 17:25:18.261556 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerStarted","Data":"6f4f50ebd41355b6ed31b7005a870b865e493a272861fcba9a7b196e8222d971"} Nov 28 17:25:18 crc kubenswrapper[5024]: I1128 17:25:18.533370 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed08872d-895b-4d55-bd98-9b12b32f28f6" path="/var/lib/kubelet/pods/ed08872d-895b-4d55-bd98-9b12b32f28f6/volumes" Nov 28 17:25:18 crc kubenswrapper[5024]: I1128 17:25:18.899240 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:18 crc kubenswrapper[5024]: I1128 17:25:18.979402 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:25:18 crc kubenswrapper[5024]: I1128 17:25:18.979450 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:25:19 crc kubenswrapper[5024]: I1128 17:25:19.146321 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:19 crc kubenswrapper[5024]: W1128 17:25:19.160050 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ceaa7f_4c44_4e1d_be3e_3a17ed3ee1aa.slice/crio-eaf726d78329332688511a60933458955abd3307251332156b4107fbc5f1642b WatchSource:0}: Error finding container eaf726d78329332688511a60933458955abd3307251332156b4107fbc5f1642b: Status 404 returned error can't find the container with id eaf726d78329332688511a60933458955abd3307251332156b4107fbc5f1642b Nov 28 17:25:19 crc kubenswrapper[5024]: I1128 17:25:19.302299 5024 generic.go:334] "Generic (PLEG): container finished" podID="e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" containerID="e69794901f3ecc2a3703449de30328242032fbe02e17b29237a656216d3fd946" exitCode=0 Nov 28 17:25:19 crc kubenswrapper[5024]: I1128 17:25:19.302375 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b77gw" event={"ID":"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4","Type":"ContainerDied","Data":"e69794901f3ecc2a3703449de30328242032fbe02e17b29237a656216d3fd946"} Nov 28 17:25:19 crc kubenswrapper[5024]: I1128 17:25:19.304823 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerStarted","Data":"eaf726d78329332688511a60933458955abd3307251332156b4107fbc5f1642b"} Nov 28 17:25:19 crc kubenswrapper[5024]: I1128 17:25:19.538453 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.316988 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerStarted","Data":"5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc"} Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.319633 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerStarted","Data":"f64db06990d5b2696f7ec543948ba32cb9c70f1cdd9684b94f581d1be9ae1973"} Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.321676 5024 generic.go:334] "Generic (PLEG): container finished" podID="41c514f3-354d-4254-aea0-821b23140252" containerID="705d6097270b28f54efe431dc19d441a16e92988d96d63d9b1a3847adc062c0a" exitCode=0 Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.321830 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7k6k" event={"ID":"41c514f3-354d-4254-aea0-821b23140252","Type":"ContainerDied","Data":"705d6097270b28f54efe431dc19d441a16e92988d96d63d9b1a3847adc062c0a"} Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.801972 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.902564 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xflkn\" (UniqueName: \"kubernetes.io/projected/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-kube-api-access-xflkn\") pod \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.902947 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-config-data\") pod \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.902991 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-scripts\") pod \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.903122 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-combined-ca-bundle\") pod \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\" (UID: \"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4\") " Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.907396 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-kube-api-access-xflkn" (OuterVolumeSpecName: "kube-api-access-xflkn") pod "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" (UID: "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4"). InnerVolumeSpecName "kube-api-access-xflkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.917002 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-scripts" (OuterVolumeSpecName: "scripts") pod "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" (UID: "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.966290 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" (UID: "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:20 crc kubenswrapper[5024]: I1128 17:25:20.966401 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-config-data" (OuterVolumeSpecName: "config-data") pod "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" (UID: "e179cfb9-4f0a-4b45-9c10-ad14432d7fc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.005205 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.005245 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xflkn\" (UniqueName: \"kubernetes.io/projected/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-kube-api-access-xflkn\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.005257 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.005265 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.339453 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerStarted","Data":"be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5"} Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.342328 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b77gw" event={"ID":"e179cfb9-4f0a-4b45-9c10-ad14432d7fc4","Type":"ContainerDied","Data":"f4cb110daa5792e146860b2bc045698f9fc8e62c8f72c9f6121320e03edfd41d"} Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.342381 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b77gw" Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.342381 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4cb110daa5792e146860b2bc045698f9fc8e62c8f72c9f6121320e03edfd41d" Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.544556 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.545070 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-log" containerID="cri-o://e5e764fbc5dd3faa51f5a706dca0f8fd67189b0404f57c75fcad2e5de253e3b7" gracePeriod=30 Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.545335 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-api" containerID="cri-o://1f6bb472ddee69d36d1d1c7728cdc50a464f4b085a09e4f47e4abdc92859dd5d" gracePeriod=30 Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.587119 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:25:21 crc kubenswrapper[5024]: I1128 17:25:21.587340 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" containerName="nova-scheduler-scheduler" containerID="cri-o://2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0" gracePeriod=30 Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.231980 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7k6k" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.335886 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-config-data\") pod \"41c514f3-354d-4254-aea0-821b23140252\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.336072 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-combined-ca-bundle\") pod \"41c514f3-354d-4254-aea0-821b23140252\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.336224 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whrqr\" (UniqueName: \"kubernetes.io/projected/41c514f3-354d-4254-aea0-821b23140252-kube-api-access-whrqr\") pod \"41c514f3-354d-4254-aea0-821b23140252\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.336264 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-scripts\") pod \"41c514f3-354d-4254-aea0-821b23140252\" (UID: \"41c514f3-354d-4254-aea0-821b23140252\") " Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.346830 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-scripts" (OuterVolumeSpecName: "scripts") pod "41c514f3-354d-4254-aea0-821b23140252" (UID: 
"41c514f3-354d-4254-aea0-821b23140252"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.346874 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c514f3-354d-4254-aea0-821b23140252-kube-api-access-whrqr" (OuterVolumeSpecName: "kube-api-access-whrqr") pod "41c514f3-354d-4254-aea0-821b23140252" (UID: "41c514f3-354d-4254-aea0-821b23140252"). InnerVolumeSpecName "kube-api-access-whrqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.415083 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-config-data" (OuterVolumeSpecName: "config-data") pod "41c514f3-354d-4254-aea0-821b23140252" (UID: "41c514f3-354d-4254-aea0-821b23140252"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.419817 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c514f3-354d-4254-aea0-821b23140252" (UID: "41c514f3-354d-4254-aea0-821b23140252"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.440521 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j7k6k" event={"ID":"41c514f3-354d-4254-aea0-821b23140252","Type":"ContainerDied","Data":"bb760cdbdb8cea3cf3fcceeb0dec862fa33d523bb09db8cc12737e6286f62b68"} Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.440725 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb760cdbdb8cea3cf3fcceeb0dec862fa33d523bb09db8cc12737e6286f62b68" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.440868 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j7k6k" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.444732 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whrqr\" (UniqueName: \"kubernetes.io/projected/41c514f3-354d-4254-aea0-821b23140252-kube-api-access-whrqr\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.444770 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.444783 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.444797 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c514f3-354d-4254-aea0-821b23140252-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.480460 5024 generic.go:334] "Generic (PLEG): container finished" podID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerID="1f6bb472ddee69d36d1d1c7728cdc50a464f4b085a09e4f47e4abdc92859dd5d" exitCode=0 Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.480496 5024 generic.go:334] "Generic (PLEG): container finished" podID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerID="e5e764fbc5dd3faa51f5a706dca0f8fd67189b0404f57c75fcad2e5de253e3b7" exitCode=143 Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.480580 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83db18c-461d-4602-a0e8-3f6506e931b4","Type":"ContainerDied","Data":"1f6bb472ddee69d36d1d1c7728cdc50a464f4b085a09e4f47e4abdc92859dd5d"} Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.480608 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83db18c-461d-4602-a0e8-3f6506e931b4","Type":"ContainerDied","Data":"e5e764fbc5dd3faa51f5a706dca0f8fd67189b0404f57c75fcad2e5de253e3b7"} Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.552453 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-api" containerID="cri-o://d0ce6d04ef261f7ab68ab98250d111df1f840840ecdfedb94ac5b910bd19a99f" gracePeriod=30 Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.552748 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-listener" containerID="cri-o://24c41c98268193f4ca5c5cadce42a96f477a16c6f041bc6a681724efab993bdb" gracePeriod=30 Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.552907 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-notifier" containerID="cri-o://f64db06990d5b2696f7ec543948ba32cb9c70f1cdd9684b94f581d1be9ae1973" gracePeriod=30 Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.552984 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-evaluator" 
containerID="cri-o://6f4f50ebd41355b6ed31b7005a870b865e493a272861fcba9a7b196e8222d971" gracePeriod=30 Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.568302 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:25:22 crc kubenswrapper[5024]: E1128 17:25:22.568817 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.638155 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=6.323814329 podStartE2EDuration="13.638133181s" podCreationTimestamp="2025-11-28 17:25:09 +0000 UTC" firstStartedPulling="2025-11-28 17:25:14.220098967 +0000 UTC m=+1616.269019872" lastFinishedPulling="2025-11-28 17:25:21.534417819 +0000 UTC m=+1623.583338724" observedRunningTime="2025-11-28 17:25:22.590989827 +0000 UTC m=+1624.639910732" watchObservedRunningTime="2025-11-28 17:25:22.638133181 +0000 UTC m=+1624.687054086" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.912291 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerStarted","Data":"24c41c98268193f4ca5c5cadce42a96f477a16c6f041bc6a681724efab993bdb"} Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.912753 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 17:25:22 crc kubenswrapper[5024]: E1128 17:25:22.913129 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" containerName="nova-manage" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.913145 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" containerName="nova-manage" Nov 28 17:25:22 crc kubenswrapper[5024]: E1128 17:25:22.913182 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c514f3-354d-4254-aea0-821b23140252" containerName="nova-cell1-conductor-db-sync" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.913190 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c514f3-354d-4254-aea0-821b23140252" containerName="nova-cell1-conductor-db-sync" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.913389 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" containerName="nova-manage" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.913430 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c514f3-354d-4254-aea0-821b23140252" containerName="nova-cell1-conductor-db-sync" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.914185 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.914261 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:22 crc kubenswrapper[5024]: I1128 17:25:22.919891 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.095332 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.095776 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.096134 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbz8\" (UniqueName: \"kubernetes.io/projected/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-kube-api-access-brbz8\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.134595 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:23 crc kubenswrapper[5024]: E1128 17:25:23.183570 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e273701_bfd9_47a7_801f_79587c45b401.slice/crio-f64db06990d5b2696f7ec543948ba32cb9c70f1cdd9684b94f581d1be9ae1973.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7e8cf69_7066_4bc5_86e3_ecbfa374edf0.slice/crio-conmon-2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.198460 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.198645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.198858 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbz8\" (UniqueName: \"kubernetes.io/projected/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-kube-api-access-brbz8\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.208509 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.210081 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.219874 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbz8\" (UniqueName: \"kubernetes.io/projected/bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f-kube-api-access-brbz8\") pod \"nova-cell1-conductor-0\" (UID: \"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.300167 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-config-data\") pod \"d83db18c-461d-4602-a0e8-3f6506e931b4\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.300373 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbrjq\" (UniqueName: \"kubernetes.io/projected/d83db18c-461d-4602-a0e8-3f6506e931b4-kube-api-access-vbrjq\") pod \"d83db18c-461d-4602-a0e8-3f6506e931b4\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.300466 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-combined-ca-bundle\") pod \"d83db18c-461d-4602-a0e8-3f6506e931b4\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.300560 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83db18c-461d-4602-a0e8-3f6506e931b4-logs\") pod \"d83db18c-461d-4602-a0e8-3f6506e931b4\" (UID: \"d83db18c-461d-4602-a0e8-3f6506e931b4\") " Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.301097 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83db18c-461d-4602-a0e8-3f6506e931b4-logs" (OuterVolumeSpecName: "logs") pod "d83db18c-461d-4602-a0e8-3f6506e931b4" (UID: "d83db18c-461d-4602-a0e8-3f6506e931b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.301662 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83db18c-461d-4602-a0e8-3f6506e931b4-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.305330 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83db18c-461d-4602-a0e8-3f6506e931b4-kube-api-access-vbrjq" (OuterVolumeSpecName: "kube-api-access-vbrjq") pod "d83db18c-461d-4602-a0e8-3f6506e931b4" (UID: "d83db18c-461d-4602-a0e8-3f6506e931b4"). InnerVolumeSpecName "kube-api-access-vbrjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.334046 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d83db18c-461d-4602-a0e8-3f6506e931b4" (UID: "d83db18c-461d-4602-a0e8-3f6506e931b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.336269 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-config-data" (OuterVolumeSpecName: "config-data") pod "d83db18c-461d-4602-a0e8-3f6506e931b4" (UID: "d83db18c-461d-4602-a0e8-3f6506e931b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.353858 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.404198 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.404229 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbrjq\" (UniqueName: \"kubernetes.io/projected/d83db18c-461d-4602-a0e8-3f6506e931b4-kube-api-access-vbrjq\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.404243 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83db18c-461d-4602-a0e8-3f6506e931b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.452059 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.505868 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-config-data\") pod \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.505916 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v84l4\" (UniqueName: \"kubernetes.io/projected/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-kube-api-access-v84l4\") pod \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.506032 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-combined-ca-bundle\") pod \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\" (UID: \"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0\") " Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.509574 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-kube-api-access-v84l4" (OuterVolumeSpecName: "kube-api-access-v84l4") pod "e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" (UID: "e7e8cf69-7066-4bc5-86e3-ecbfa374edf0"). 
InnerVolumeSpecName "kube-api-access-v84l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.538705 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" (UID: "e7e8cf69-7066-4bc5-86e3-ecbfa374edf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.541652 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-config-data" (OuterVolumeSpecName: "config-data") pod "e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" (UID: "e7e8cf69-7066-4bc5-86e3-ecbfa374edf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.586471 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerStarted","Data":"6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac"} Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.589235 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83db18c-461d-4602-a0e8-3f6506e931b4","Type":"ContainerDied","Data":"be18fe70e3570ac7bdc98b54c5440027d86055ff183a2e58ef43a6525c212833"} Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.589274 5024 scope.go:117] "RemoveContainer" containerID="1f6bb472ddee69d36d1d1c7728cdc50a464f4b085a09e4f47e4abdc92859dd5d" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.589408 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.599559 5024 generic.go:334] "Generic (PLEG): container finished" podID="8e273701-bfd9-47a7-801f-79587c45b401" containerID="f64db06990d5b2696f7ec543948ba32cb9c70f1cdd9684b94f581d1be9ae1973" exitCode=0 Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.599597 5024 generic.go:334] "Generic (PLEG): container finished" podID="8e273701-bfd9-47a7-801f-79587c45b401" containerID="6f4f50ebd41355b6ed31b7005a870b865e493a272861fcba9a7b196e8222d971" exitCode=0 Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.599607 5024 generic.go:334] "Generic (PLEG): container finished" podID="8e273701-bfd9-47a7-801f-79587c45b401" containerID="d0ce6d04ef261f7ab68ab98250d111df1f840840ecdfedb94ac5b910bd19a99f" exitCode=0 Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.599699 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerDied","Data":"f64db06990d5b2696f7ec543948ba32cb9c70f1cdd9684b94f581d1be9ae1973"} Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.599731 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerDied","Data":"6f4f50ebd41355b6ed31b7005a870b865e493a272861fcba9a7b196e8222d971"} Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.599741 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerDied","Data":"d0ce6d04ef261f7ab68ab98250d111df1f840840ecdfedb94ac5b910bd19a99f"} Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.605858 5024 generic.go:334] "Generic (PLEG): container finished" podID="e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" containerID="2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0" exitCode=0 Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.605899 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0","Type":"ContainerDied","Data":"2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0"} Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.605926 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e7e8cf69-7066-4bc5-86e3-ecbfa374edf0","Type":"ContainerDied","Data":"da6540dab0dbd906acd43405039413c39030ba7ac8fe65947916145aee91323b"} Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.606085 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.611197 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.611220 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v84l4\" (UniqueName: \"kubernetes.io/projected/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-kube-api-access-v84l4\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.611231 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.647594 5024 scope.go:117] "RemoveContainer" containerID="e5e764fbc5dd3faa51f5a706dca0f8fd67189b0404f57c75fcad2e5de253e3b7" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.673481 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.724383 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.724781 5024 scope.go:117] "RemoveContainer" containerID="2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.738663 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:23 crc kubenswrapper[5024]: E1128 17:25:23.739282 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-api" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.739309 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-api" Nov 28 17:25:23 crc kubenswrapper[5024]: E1128 17:25:23.739335 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" containerName="nova-scheduler-scheduler" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.739344 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" containerName="nova-scheduler-scheduler" Nov 28 17:25:23 crc kubenswrapper[5024]: E1128 17:25:23.739390 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-log" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.739399 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-log" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.739678 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-log" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.739726 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" containerName="nova-api-api" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.739750 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" containerName="nova-scheduler-scheduler" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.741603 5024 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.744864 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.758755 5024 scope.go:117] "RemoveContainer" containerID="2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.763374 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:25:23 crc kubenswrapper[5024]: E1128 17:25:23.764397 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0\": container with ID starting with 2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0 not found: ID does not exist" containerID="2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.764538 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0"} err="failed to get container status \"2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0\": rpc error: code = NotFound desc = could not find container \"2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0\": container with ID starting with 2002ba8740148d9123b89de89794cc3e44a4cb9799f754198e92802b1e76b3c0 not found: ID does not exist" Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.773369 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.782742 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.793379 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.797033 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.800521 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.814861 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.920460 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-config-data\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.920544 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.920638 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-logs\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.920685 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv7h5\" (UniqueName: \"kubernetes.io/projected/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-kube-api-access-jv7h5\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.920761 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-config-data\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.920787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:23 crc kubenswrapper[5024]: I1128 17:25:23.920862 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xff2c\" (UniqueName: \"kubernetes.io/projected/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-kube-api-access-xff2c\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.011585 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.027170 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-config-data\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.027236 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.027394 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xff2c\" (UniqueName: \"kubernetes.io/projected/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-kube-api-access-xff2c\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.027573 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-config-data\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.027639 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.027736 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-logs\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.027806 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv7h5\" (UniqueName: \"kubernetes.io/projected/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-kube-api-access-jv7h5\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.030106 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-logs\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.034006 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.034151 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-config-data\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.045765 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-config-data\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.052955 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv7h5\" (UniqueName: \"kubernetes.io/projected/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-kube-api-access-jv7h5\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.054983 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " pod="openstack/nova-scheduler-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.067651 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xff2c\" (UniqueName: \"kubernetes.io/projected/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-kube-api-access-xff2c\") pod \"nova-api-0\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.124702 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.366009 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.544813 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83db18c-461d-4602-a0e8-3f6506e931b4" path="/var/lib/kubelet/pods/d83db18c-461d-4602-a0e8-3f6506e931b4/volumes"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.545537 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e8cf69-7066-4bc5-86e3-ecbfa374edf0" path="/var/lib/kubelet/pods/e7e8cf69-7066-4bc5-86e3-ecbfa374edf0/volumes"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.631799 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f","Type":"ContainerStarted","Data":"c67928691d7527690f2969cf16db4290f284701139764a07450c0ca8c1ed1b66"}
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.631847 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f","Type":"ContainerStarted","Data":"b6ed3baef1c129feeb9f24a92d71f4a2816c5b18453da608bf0b1c9725f03832"}
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.633271 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.639300 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerStarted","Data":"5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0"}
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.639967 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
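Each kubenswrapper entry above carries a klog-style header: severity letter (I/W/E), MMDD date, wall-clock time, PID, then source file:line before the message. A minimal Go sketch for splitting that header out of a record; the regexp and field names are illustrative assumptions for working with this log, not anything kubelet itself ships:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Illustrative klog header pattern: "I1128 17:25:24.639967 5024 kubelet.go:2542] ..."
    var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	line := `I1128 17:25:24.639967 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"`
    	if m := klogHeader.FindStringSubmatch(line); m != nil {
    		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s msg=%s\n",
    			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    	}
    }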
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.683774 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.683752558 podStartE2EDuration="2.683752558s" podCreationTimestamp="2025-11-28 17:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:24.656597258 +0000 UTC m=+1626.705518183" watchObservedRunningTime="2025-11-28 17:25:24.683752558 +0000 UTC m=+1626.732673463"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.689382 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.801314027 podStartE2EDuration="7.689363289s" podCreationTimestamp="2025-11-28 17:25:17 +0000 UTC" firstStartedPulling="2025-11-28 17:25:19.163298082 +0000 UTC m=+1621.212218987" lastFinishedPulling="2025-11-28 17:25:24.051347344 +0000 UTC m=+1626.100268249" observedRunningTime="2025-11-28 17:25:24.676802108 +0000 UTC m=+1626.725723013" watchObservedRunningTime="2025-11-28 17:25:24.689363289 +0000 UTC m=+1626.738284194"
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.710697 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 17:25:24 crc kubenswrapper[5024]: W1128 17:25:24.993368 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418cf2a6_87ce_451e_bbd2_c65f5112fd9f.slice/crio-17d10e832f039354767da77e74b25f1ca15b79151e82fc9f4ff916c8e6a5f942 WatchSource:0}: Error finding container 17d10e832f039354767da77e74b25f1ca15b79151e82fc9f4ff916c8e6a5f942: Status 404 returned error can't find the container with id 17d10e832f039354767da77e74b25f1ca15b79151e82fc9f4ff916c8e6a5f942
Nov 28 17:25:24 crc kubenswrapper[5024]: I1128 17:25:24.994869 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 28 17:25:25 crc kubenswrapper[5024]: I1128 17:25:25.654190 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2c0c8c6-e4ff-490b-94c5-772a7066c4db","Type":"ContainerStarted","Data":"a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59"}
Nov 28 17:25:25 crc kubenswrapper[5024]: I1128 17:25:25.655257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2c0c8c6-e4ff-490b-94c5-772a7066c4db","Type":"ContainerStarted","Data":"92f97510784ef041436345d9b5a5f0a57cf507796f5a977149c68f2be4d0a0bb"}
Nov 28 17:25:25 crc kubenswrapper[5024]: I1128 17:25:25.658671 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"418cf2a6-87ce-451e-bbd2-c65f5112fd9f","Type":"ContainerStarted","Data":"81d31f98ecd9e3e7103dab6bd705e0586d9fd22ef84e390b9cf218097d9699d9"}
Nov 28 17:25:25 crc kubenswrapper[5024]: I1128 17:25:25.658716 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"418cf2a6-87ce-451e-bbd2-c65f5112fd9f","Type":"ContainerStarted","Data":"5f365fe8959fa4d63bfc921304bd500311dc70138cf929e212051b8ed5ec99f6"}
Nov 28 17:25:25 crc kubenswrapper[5024]: I1128 17:25:25.658727 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"418cf2a6-87ce-451e-bbd2-c65f5112fd9f","Type":"ContainerStarted","Data":"17d10e832f039354767da77e74b25f1ca15b79151e82fc9f4ff916c8e6a5f942"}
Nov 28 17:25:25 crc kubenswrapper[5024]: I1128 17:25:25.693480 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.693461111 podStartE2EDuration="2.693461111s" podCreationTimestamp="2025-11-28 17:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:25.68717897 +0000 UTC m=+1627.736099875" watchObservedRunningTime="2025-11-28 17:25:25.693461111 +0000 UTC m=+1627.742382016"
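The two ceilometer-0 figures above are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp (7.689363289s) and podStartSLOduration being the same interval with image-pull time excluded: (firstStartedPulling - creation) + (watchObservedRunningTime - lastFinishedPulling) = 2.163298082s + 0.638015945s = 2.801314027s. A short Go check of that arithmetic, using the timestamps exactly as logged (the monotonic "m=+..." suffixes stripped):

    package main

    import (
    	"fmt"
    	"time"
    )

    // layout matches the timestamps as printed in the log.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-11-28 17:25:17 +0000 UTC")
    	firstPull := mustParse("2025-11-28 17:25:19.163298082 +0000 UTC")
    	lastPull := mustParse("2025-11-28 17:25:24.051347344 +0000 UTC")
    	observed := mustParse("2025-11-28 17:25:24.689363289 +0000 UTC")

    	fmt.Println("podStartE2EDuration:", observed.Sub(created))                         // 7.689363289s
    	fmt.Println("podStartSLOduration:", firstPull.Sub(created)+observed.Sub(lastPull)) // 2.801314027s
    }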
+0000 UTC m=+1627.736099875" watchObservedRunningTime="2025-11-28 17:25:25.693461111 +0000 UTC m=+1627.742382016" Nov 28 17:25:25 crc kubenswrapper[5024]: I1128 17:25:25.729434 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.729408853 podStartE2EDuration="2.729408853s" podCreationTimestamp="2025-11-28 17:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:25.706789413 +0000 UTC m=+1627.755710328" watchObservedRunningTime="2025-11-28 17:25:25.729408853 +0000 UTC m=+1627.778329768" Nov 28 17:25:29 crc kubenswrapper[5024]: I1128 17:25:29.126048 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 17:25:33 crc kubenswrapper[5024]: I1128 17:25:33.483647 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 17:25:34 crc kubenswrapper[5024]: I1128 17:25:34.126389 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 17:25:34 crc kubenswrapper[5024]: I1128 17:25:34.158904 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 17:25:34 crc kubenswrapper[5024]: I1128 17:25:34.367256 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:25:34 crc kubenswrapper[5024]: I1128 17:25:34.368576 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:25:34 crc kubenswrapper[5024]: I1128 17:25:34.790013 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 17:25:35 crc kubenswrapper[5024]: I1128 17:25:35.408698 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:25:35 crc kubenswrapper[5024]: I1128 17:25:35.408676 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:25:35 crc kubenswrapper[5024]: I1128 17:25:35.497901 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:25:35 crc kubenswrapper[5024]: E1128 17:25:35.498409 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.371081 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.371680 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
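The Startup probe failures above show both nova-api-0 containers being checked with GET http://10.217.0.246:8774/ and a client timeout, with the retries landing roughly ten seconds apart before the probe reports "started". The probe definition itself is not in the log; a hedged client-go sketch of the shape such a probe would take, where the path, period, timeout and threshold are assumptions for illustration (only the port and the GET check are visible above):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	// Assumed values throughout; not read from the cluster.
    	startup := &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			HTTPGet: &corev1.HTTPGetAction{
    				Path: "/",
    				Port: intstr.FromInt(8774),
    			},
    		},
    		PeriodSeconds:    10, // assumed; the logged retries are ~10s apart
    		TimeoutSeconds:   10, // assumed; matches "Client.Timeout exceeded"
    		FailureThreshold: 6,  // assumed
    	}
    	fmt.Printf("startup probe: %+v\n", startup)
    }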
pod="openstack/nova-api-0" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.372274 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.372569 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.375719 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.375972 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.599224 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8p964"] Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.602138 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.625514 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8p964"] Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.639770 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.639842 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.639864 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2nx2\" (UniqueName: \"kubernetes.io/projected/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-kube-api-access-j2nx2\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.639924 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.640068 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.640350 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-config\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: 
\"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.742990 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.743067 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.743092 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2nx2\" (UniqueName: \"kubernetes.io/projected/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-kube-api-access-j2nx2\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.743144 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.743255 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.743537 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-config\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.744196 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.744339 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.744424 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " 
pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.744674 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-config\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.744929 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.763519 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2nx2\" (UniqueName: \"kubernetes.io/projected/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-kube-api-access-j2nx2\") pod \"dnsmasq-dns-6b7bbf7cf9-8p964\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:44 crc kubenswrapper[5024]: I1128 17:25:44.954110 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.523165 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8p964"] Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.908035 5024 generic.go:334] "Generic (PLEG): container finished" podID="813fefa2-4c39-465a-bf6a-5b2517cd1101" containerID="e53c2f40c52f3e1a783029846d8a1f534a416fb19302e8176450f34ad4d8e1c1" exitCode=137 Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.908374 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"813fefa2-4c39-465a-bf6a-5b2517cd1101","Type":"ContainerDied","Data":"e53c2f40c52f3e1a783029846d8a1f534a416fb19302e8176450f34ad4d8e1c1"} Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.908404 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"813fefa2-4c39-465a-bf6a-5b2517cd1101","Type":"ContainerDied","Data":"d16f0481a5db854726f21dbbbc03bd6bf3d4c0907238d0588ee55e9ee4ecd6ee"} Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.908415 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d16f0481a5db854726f21dbbbc03bd6bf3d4c0907238d0588ee55e9ee4ecd6ee" Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.913342 5024 generic.go:334] "Generic (PLEG): container finished" podID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerID="5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b" exitCode=0 Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.913411 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" event={"ID":"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6","Type":"ContainerDied","Data":"5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b"} Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.913442 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" event={"ID":"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6","Type":"ContainerStarted","Data":"28e35d444ed2064a81d740427f8ac4f5af7add46e2c3c6dd3531265d3b062c32"} Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 
Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.919475 5024 generic.go:334] "Generic (PLEG): container finished" podID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerID="0c93f45552828c1a71680441d5d5cceccba04134856cb3be3590076385b9ebf5" exitCode=137
Nov 28 17:25:45 crc kubenswrapper[5024]: I1128 17:25:45.920693 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f","Type":"ContainerDied","Data":"0c93f45552828c1a71680441d5d5cceccba04134856cb3be3590076385b9ebf5"}
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.167699 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.175850 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.321981 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-config-data\") pod \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") "
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.322475 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpgpg\" (UniqueName: \"kubernetes.io/projected/813fefa2-4c39-465a-bf6a-5b2517cd1101-kube-api-access-cpgpg\") pod \"813fefa2-4c39-465a-bf6a-5b2517cd1101\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") "
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.322575 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-config-data\") pod \"813fefa2-4c39-465a-bf6a-5b2517cd1101\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") "
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.322702 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xzqq\" (UniqueName: \"kubernetes.io/projected/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-kube-api-access-4xzqq\") pod \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") "
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.322796 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-logs\") pod \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") "
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.322850 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-combined-ca-bundle\") pod \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\" (UID: \"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f\") "
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.323001 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-combined-ca-bundle\") pod \"813fefa2-4c39-465a-bf6a-5b2517cd1101\" (UID: \"813fefa2-4c39-465a-bf6a-5b2517cd1101\") "
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.323748 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-logs" (OuterVolumeSpecName: "logs") pod "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" (UID: "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.324538 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-logs\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.329977 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-kube-api-access-4xzqq" (OuterVolumeSpecName: "kube-api-access-4xzqq") pod "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" (UID: "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f"). InnerVolumeSpecName "kube-api-access-4xzqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.333372 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813fefa2-4c39-465a-bf6a-5b2517cd1101-kube-api-access-cpgpg" (OuterVolumeSpecName: "kube-api-access-cpgpg") pod "813fefa2-4c39-465a-bf6a-5b2517cd1101" (UID: "813fefa2-4c39-465a-bf6a-5b2517cd1101"). InnerVolumeSpecName "kube-api-access-cpgpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.353199 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-config-data" (OuterVolumeSpecName: "config-data") pod "813fefa2-4c39-465a-bf6a-5b2517cd1101" (UID: "813fefa2-4c39-465a-bf6a-5b2517cd1101"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.366141 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-config-data" (OuterVolumeSpecName: "config-data") pod "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" (UID: "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.369548 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" (UID: "6b08007c-49c1-4a11-ad55-5ee9fecf6d3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.382841 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "813fefa2-4c39-465a-bf6a-5b2517cd1101" (UID: "813fefa2-4c39-465a-bf6a-5b2517cd1101"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.427117 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.427150 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.427160 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.427170 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpgpg\" (UniqueName: \"kubernetes.io/projected/813fefa2-4c39-465a-bf6a-5b2517cd1101-kube-api-access-cpgpg\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.427184 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813fefa2-4c39-465a-bf6a-5b2517cd1101-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.427193 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xzqq\" (UniqueName: \"kubernetes.io/projected/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f-kube-api-access-4xzqq\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.934569 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b08007c-49c1-4a11-ad55-5ee9fecf6d3f","Type":"ContainerDied","Data":"d9cc2a71cbe5f36048bec477cf464eb397fbef89451a52576d66ebb9449b14db"}
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.934625 5024 scope.go:117] "RemoveContainer" containerID="0c93f45552828c1a71680441d5d5cceccba04134856cb3be3590076385b9ebf5"
Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.934849 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.939434 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" event={"ID":"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6","Type":"ContainerStarted","Data":"1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6"} Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.966409 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.967093 5024 scope.go:117] "RemoveContainer" containerID="a8654b9fba746666255220cfce1c94ebfa3b5ebf7524e0146a08b4aeeb8261ff" Nov 28 17:25:46 crc kubenswrapper[5024]: I1128 17:25:46.990324 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.006758 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" podStartSLOduration=3.006733089 podStartE2EDuration="3.006733089s" podCreationTimestamp="2025-11-28 17:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:46.994421105 +0000 UTC m=+1649.043342030" watchObservedRunningTime="2025-11-28 17:25:47.006733089 +0000 UTC m=+1649.055654004" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.039957 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: E1128 17:25:47.040557 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813fefa2-4c39-465a-bf6a-5b2517cd1101" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.040578 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="813fefa2-4c39-465a-bf6a-5b2517cd1101" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 17:25:47 crc kubenswrapper[5024]: E1128 17:25:47.040620 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-metadata" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.040627 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-metadata" Nov 28 17:25:47 crc kubenswrapper[5024]: E1128 17:25:47.040645 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-log" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.040651 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-log" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.040877 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="813fefa2-4c39-465a-bf6a-5b2517cd1101" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.040907 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-log" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.040931 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" containerName="nova-metadata-metadata" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.042273 5024 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.048580 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.048864 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.060762 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.089733 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.135561 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.186101 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.188404 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.193422 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.193709 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.204469 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.213102 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.256962 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nblv4\" (UniqueName: \"kubernetes.io/projected/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-kube-api-access-nblv4\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.257097 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-config-data\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.257124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.257233 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-logs\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.257300 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.359154 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsqx\" (UniqueName: \"kubernetes.io/projected/512b384d-2288-4ff5-9f13-bc6df840194f-kube-api-access-wnsqx\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.359211 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.359260 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nblv4\" (UniqueName: \"kubernetes.io/projected/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-kube-api-access-nblv4\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.359769 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-config-data\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.359832 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.359871 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.360319 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-logs\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.360500 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.360596 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.360725 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.360812 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-logs\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.368329 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.368472 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.368914 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-config-data\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.380000 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nblv4\" (UniqueName: \"kubernetes.io/projected/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-kube-api-access-nblv4\") pod \"nova-metadata-0\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.440494 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.440789 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-central-agent" containerID="cri-o://5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc" gracePeriod=30 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.440904 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="sg-core" containerID="cri-o://6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac" gracePeriod=30 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.440932 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-notification-agent" 
containerID="cri-o://be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5" gracePeriod=30 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.441092 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="proxy-httpd" containerID="cri-o://5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0" gracePeriod=30 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.453421 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": EOF" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.462991 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.463067 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.463127 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnsqx\" (UniqueName: \"kubernetes.io/projected/512b384d-2288-4ff5-9f13-bc6df840194f-kube-api-access-wnsqx\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.463154 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.463249 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.468011 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.468521 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.469443 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.471575 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512b384d-2288-4ff5-9f13-bc6df840194f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.481946 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnsqx\" (UniqueName: \"kubernetes.io/projected/512b384d-2288-4ff5-9f13-bc6df840194f-kube-api-access-wnsqx\") pod \"nova-cell1-novncproxy-0\" (UID: \"512b384d-2288-4ff5-9f13-bc6df840194f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.521258 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.660903 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.824952 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.825256 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-log" containerID="cri-o://5f365fe8959fa4d63bfc921304bd500311dc70138cf929e212051b8ed5ec99f6" gracePeriod=30 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.825367 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-api" containerID="cri-o://81d31f98ecd9e3e7103dab6bd705e0586d9fd22ef84e390b9cf218097d9699d9" gracePeriod=30 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.968420 5024 generic.go:334] "Generic (PLEG): container finished" podID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerID="5f365fe8959fa4d63bfc921304bd500311dc70138cf929e212051b8ed5ec99f6" exitCode=143 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.968479 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"418cf2a6-87ce-451e-bbd2-c65f5112fd9f","Type":"ContainerDied","Data":"5f365fe8959fa4d63bfc921304bd500311dc70138cf929e212051b8ed5ec99f6"} Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.975346 5024 generic.go:334] "Generic (PLEG): container finished" podID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerID="5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0" exitCode=0 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.975385 5024 generic.go:334] "Generic (PLEG): container finished" podID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerID="6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac" exitCode=2 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.975398 5024 generic.go:334] "Generic (PLEG): container finished" podID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerID="5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc" exitCode=0 Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 
17:25:47.975701 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerDied","Data":"5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0"} Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.975768 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerDied","Data":"6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac"} Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.975784 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerDied","Data":"5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc"} Nov 28 17:25:47 crc kubenswrapper[5024]: I1128 17:25:47.975846 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.012588 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.190004 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": dial tcp 10.217.0.244:3000: connect: connection refused" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.193010 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:25:48 crc kubenswrapper[5024]: E1128 17:25:48.323541 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ceaa7f_4c44_4e1d_be3e_3a17ed3ee1aa.slice/crio-be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ceaa7f_4c44_4e1d_be3e_3a17ed3ee1aa.slice/crio-conmon-be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:25:48 crc kubenswrapper[5024]: E1128 17:25:48.323636 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ceaa7f_4c44_4e1d_be3e_3a17ed3ee1aa.slice/crio-conmon-be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ceaa7f_4c44_4e1d_be3e_3a17ed3ee1aa.slice/crio-be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.516248 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:25:48 crc kubenswrapper[5024]: E1128 17:25:48.516503 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
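The recurring CrashLoopBackOff error above shows the kubelet's container restart back-off at its ceiling: the delay starts small and doubles on each failed restart until it is capped, which is the "back-off 5m0s" reported for machine-config-daemon. The progression, sketched in Go; the 10s initial delay and 5m cap are the long-standing kubelet defaults, stated here as an assumption rather than read from this node's configuration:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed kubelet defaults: 10s initial delay, doubling, 5m cap.
    	delay := 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	for restart := 1; restart <= 7; restart++ {
    		fmt.Printf("restart %d: back-off %s\n", restart, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }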
pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.531069 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b08007c-49c1-4a11-ad55-5ee9fecf6d3f" path="/var/lib/kubelet/pods/6b08007c-49c1-4a11-ad55-5ee9fecf6d3f/volumes" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.532300 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813fefa2-4c39-465a-bf6a-5b2517cd1101" path="/var/lib/kubelet/pods/813fefa2-4c39-465a-bf6a-5b2517cd1101/volumes" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.552670 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.709059 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-combined-ca-bundle\") pod \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.709205 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-scripts\") pod \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.709323 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-sg-core-conf-yaml\") pod \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.709362 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nl6r\" (UniqueName: \"kubernetes.io/projected/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-kube-api-access-6nl6r\") pod \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.709438 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-log-httpd\") pod \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.709463 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-config-data\") pod \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.709531 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-run-httpd\") pod \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\" (UID: \"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa\") " Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.711755 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" (UID: "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.714021 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" (UID: "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.714730 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-kube-api-access-6nl6r" (OuterVolumeSpecName: "kube-api-access-6nl6r") pod "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" (UID: "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa"). InnerVolumeSpecName "kube-api-access-6nl6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.715160 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-scripts" (OuterVolumeSpecName: "scripts") pod "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" (UID: "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.758208 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" (UID: "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.813230 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.813272 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.813285 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.813299 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.813316 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nl6r\" (UniqueName: \"kubernetes.io/projected/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-kube-api-access-6nl6r\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.831179 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" (UID: "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.855552 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-config-data" (OuterVolumeSpecName: "config-data") pod "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" (UID: "b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.918142 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.918186 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:48 crc kubenswrapper[5024]: I1128 17:25:48.999220 5024 generic.go:334] "Generic (PLEG): container finished" podID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerID="be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5" exitCode=0 Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:48.999306 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerDied","Data":"be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5"} Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:48.999343 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa","Type":"ContainerDied","Data":"eaf726d78329332688511a60933458955abd3307251332156b4107fbc5f1642b"} Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:48.999369 5024 scope.go:117] "RemoveContainer" containerID="5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:48.999627 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.005645 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"512b384d-2288-4ff5-9f13-bc6df840194f","Type":"ContainerStarted","Data":"78fe64e2bb88236131c523859e01f4ade0abd62c4aec37586e90721a4c0cfe5b"} Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.005697 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"512b384d-2288-4ff5-9f13-bc6df840194f","Type":"ContainerStarted","Data":"b5b36ad21c7a5bb51de886b04449bb2750107e03fde66de7b4d9bc749e463940"} Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.014253 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad","Type":"ContainerStarted","Data":"20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1"} Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.014306 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad","Type":"ContainerStarted","Data":"53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d"} Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.014320 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad","Type":"ContainerStarted","Data":"2fddd1a6f862510026c152c8d3546aa5693d285e4e9637aa6a1c968211e5c34b"} Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.050591 5024 scope.go:117] "RemoveContainer" containerID="6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.076039 
5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.075997056 podStartE2EDuration="2.075997056s" podCreationTimestamp="2025-11-28 17:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:49.039454886 +0000 UTC m=+1651.088375791" watchObservedRunningTime="2025-11-28 17:25:49.075997056 +0000 UTC m=+1651.124917971" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.081001 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.080973178 podStartE2EDuration="3.080973178s" podCreationTimestamp="2025-11-28 17:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:49.059997476 +0000 UTC m=+1651.108918391" watchObservedRunningTime="2025-11-28 17:25:49.080973178 +0000 UTC m=+1651.129894083" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.102452 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.106147 5024 scope.go:117] "RemoveContainer" containerID="be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.126841 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.144590 5024 scope.go:117] "RemoveContainer" containerID="5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.149766 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.150521 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-central-agent" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150543 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-central-agent" Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.150566 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="proxy-httpd" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150575 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="proxy-httpd" Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.150605 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-notification-agent" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150611 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-notification-agent" Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.150638 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="sg-core" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150644 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="sg-core" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150853 5024 
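Annotation: podStartSLOduration in the tracker records above is simply watchObservedRunningTime minus podCreationTimestamp; for nova-cell1-novncproxy-0 that is 17:25:49.075997056 - 17:25:47 = 2.075997056s, and for nova-metadata-0 17:25:49.080973178 - 17:25:46 = 3.080973178s. The zero-value firstStartedPulling/lastFinishedPulling timestamps mean no image pull was needed. A quick recomputation in Go using the two timestamps from the first record:

package main

import (
	"fmt"
	"time"
)

// Recompute podStartSLOduration for nova-cell1-novncproxy-0 from the
// tracker record above: created 17:25:47, first observed running
// 17:25:49.075997056. Expect 2.075997056s.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-28 17:25:47 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-28 17:25:49.075997056 +0000 UTC")
	fmt.Println(running.Sub(created)) // 2.075997056s
}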
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150853 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="sg-core"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150868 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="proxy-httpd"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150891 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-notification-agent"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.150902 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" containerName="ceilometer-central-agent"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.160960 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.165467 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.171583 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.191249 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.202138 5024 scope.go:117] "RemoveContainer" containerID="5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0"
Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.203011 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0\": container with ID starting with 5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0 not found: ID does not exist" containerID="5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.203080 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0"} err="failed to get container status \"5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0\": rpc error: code = NotFound desc = could not find container \"5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0\": container with ID starting with 5d19bf4ab2ce781c2c654b3a3083a451e1e363c05d8743bd06ef33b57d541af0 not found: ID does not exist"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.203119 5024 scope.go:117] "RemoveContainer" containerID="6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac"
Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.204736 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac\": container with ID starting with 6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac not found: ID does not exist" containerID="6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac"
\"6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac\": rpc error: code = NotFound desc = could not find container \"6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac\": container with ID starting with 6234e7e15ba38b448178dac991e71e861eaffd01417f4d185f1379700fdcc6ac not found: ID does not exist" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.204789 5024 scope.go:117] "RemoveContainer" containerID="be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5" Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.205460 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5\": container with ID starting with be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5 not found: ID does not exist" containerID="be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.205491 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5"} err="failed to get container status \"be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5\": rpc error: code = NotFound desc = could not find container \"be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5\": container with ID starting with be52891669a509c3dccaaeaa689765b6f1e45a955478956ea6ebfd13942c1fb5 not found: ID does not exist" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.205510 5024 scope.go:117] "RemoveContainer" containerID="5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc" Nov 28 17:25:49 crc kubenswrapper[5024]: E1128 17:25:49.207161 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc\": container with ID starting with 5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc not found: ID does not exist" containerID="5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.207193 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc"} err="failed to get container status \"5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc\": rpc error: code = NotFound desc = could not find container \"5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc\": container with ID starting with 5c39c89bdf6ed185a2f6a453a5dceae95545f0b89044d9ff4618e24f4ff3c2bc not found: ID does not exist" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.332271 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-config-data\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.332673 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0" Nov 28 
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.332724 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-run-httpd\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.332744 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-log-httpd\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.332805 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhkwk\" (UniqueName: \"kubernetes.io/projected/3b82a034-95aa-410e-b4ef-f99f9d589588-kube-api-access-dhkwk\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.332825 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.332958 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-scripts\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.435219 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-scripts\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.435319 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-config-data\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.435365 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.435385 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-run-httpd\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.435402 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-log-httpd\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.435445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhkwk\" (UniqueName: \"kubernetes.io/projected/3b82a034-95aa-410e-b4ef-f99f9d589588-kube-api-access-dhkwk\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.435465 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.436209 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-run-httpd\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.436586 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-log-httpd\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.441777 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.443085 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-scripts\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.443550 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-config-data\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.454910 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.464186 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhkwk\" (UniqueName: \"kubernetes.io/projected/3b82a034-95aa-410e-b4ef-f99f9d589588-kube-api-access-dhkwk\") pod \"ceilometer-0\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:49 crc kubenswrapper[5024]: I1128 17:25:49.830557 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:50 crc kubenswrapper[5024]: I1128 17:25:50.009964 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:50 crc kubenswrapper[5024]: I1128 17:25:50.036328 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerStarted","Data":"bebaf1c11f8f7c94729039a82e44ed3ecc06008f3769d97d6dd3968151709dcb"} Nov 28 17:25:50 crc kubenswrapper[5024]: I1128 17:25:50.515257 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa" path="/var/lib/kubelet/pods/b2ceaa7f-4c44-4e1d-be3e-3a17ed3ee1aa/volumes" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.069736 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerStarted","Data":"5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2"} Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.072308 5024 generic.go:334] "Generic (PLEG): container finished" podID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerID="81d31f98ecd9e3e7103dab6bd705e0586d9fd22ef84e390b9cf218097d9699d9" exitCode=0 Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.072341 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"418cf2a6-87ce-451e-bbd2-c65f5112fd9f","Type":"ContainerDied","Data":"81d31f98ecd9e3e7103dab6bd705e0586d9fd22ef84e390b9cf218097d9699d9"} Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.524930 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.611161 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xff2c\" (UniqueName: \"kubernetes.io/projected/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-kube-api-access-xff2c\") pod \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.611248 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-config-data\") pod \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.611371 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-combined-ca-bundle\") pod \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.611394 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-logs\") pod \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\" (UID: \"418cf2a6-87ce-451e-bbd2-c65f5112fd9f\") " Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.612365 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-logs" (OuterVolumeSpecName: "logs") pod "418cf2a6-87ce-451e-bbd2-c65f5112fd9f" (UID: "418cf2a6-87ce-451e-bbd2-c65f5112fd9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.651640 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-kube-api-access-xff2c" (OuterVolumeSpecName: "kube-api-access-xff2c") pod "418cf2a6-87ce-451e-bbd2-c65f5112fd9f" (UID: "418cf2a6-87ce-451e-bbd2-c65f5112fd9f"). InnerVolumeSpecName "kube-api-access-xff2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.655184 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-config-data" (OuterVolumeSpecName: "config-data") pod "418cf2a6-87ce-451e-bbd2-c65f5112fd9f" (UID: "418cf2a6-87ce-451e-bbd2-c65f5112fd9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.675066 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "418cf2a6-87ce-451e-bbd2-c65f5112fd9f" (UID: "418cf2a6-87ce-451e-bbd2-c65f5112fd9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.715152 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.715196 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.715210 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xff2c\" (UniqueName: \"kubernetes.io/projected/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-kube-api-access-xff2c\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:51 crc kubenswrapper[5024]: I1128 17:25:51.715227 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418cf2a6-87ce-451e-bbd2-c65f5112fd9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.086798 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerStarted","Data":"c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141"} Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.094583 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"418cf2a6-87ce-451e-bbd2-c65f5112fd9f","Type":"ContainerDied","Data":"17d10e832f039354767da77e74b25f1ca15b79151e82fc9f4ff916c8e6a5f942"} Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.094638 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.094689 5024 scope.go:117] "RemoveContainer" containerID="81d31f98ecd9e3e7103dab6bd705e0586d9fd22ef84e390b9cf218097d9699d9" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.126774 5024 scope.go:117] "RemoveContainer" containerID="5f365fe8959fa4d63bfc921304bd500311dc70138cf929e212051b8ed5ec99f6" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.164257 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.188151 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.202172 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:52 crc kubenswrapper[5024]: E1128 17:25:52.203047 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-log" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.203079 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-log" Nov 28 17:25:52 crc kubenswrapper[5024]: E1128 17:25:52.203139 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-api" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.203148 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-api" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.203450 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-log" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.203516 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" containerName="nova-api-api" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.205429 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.210993 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.211200 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.211215 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.214209 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.229639 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-config-data\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.229795 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.229877 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.229961 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b002b3a3-108d-4b46-9457-e96a43d82367-logs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.229987 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwr2\" (UniqueName: \"kubernetes.io/projected/b002b3a3-108d-4b46-9457-e96a43d82367-kube-api-access-nfwr2\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.230212 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-public-tls-certs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.332863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.332931 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.332992 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b002b3a3-108d-4b46-9457-e96a43d82367-logs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.333020 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwr2\" (UniqueName: \"kubernetes.io/projected/b002b3a3-108d-4b46-9457-e96a43d82367-kube-api-access-nfwr2\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.333106 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-public-tls-certs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.333172 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-config-data\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.333471 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b002b3a3-108d-4b46-9457-e96a43d82367-logs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.338799 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-config-data\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.340866 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.341479 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-public-tls-certs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.347196 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.350092 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwr2\" (UniqueName: \"kubernetes.io/projected/b002b3a3-108d-4b46-9457-e96a43d82367-kube-api-access-nfwr2\") pod \"nova-api-0\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " 
pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.527637 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418cf2a6-87ce-451e-bbd2-c65f5112fd9f" path="/var/lib/kubelet/pods/418cf2a6-87ce-451e-bbd2-c65f5112fd9f/volumes" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.529507 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.617521 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.661387 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:25:52 crc kubenswrapper[5024]: I1128 17:25:52.661431 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.117375 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerStarted","Data":"c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764"} Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.122154 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.130093 5024 generic.go:334] "Generic (PLEG): container finished" podID="8e273701-bfd9-47a7-801f-79587c45b401" containerID="24c41c98268193f4ca5c5cadce42a96f477a16c6f041bc6a681724efab993bdb" exitCode=137 Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.130151 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerDied","Data":"24c41c98268193f4ca5c5cadce42a96f477a16c6f041bc6a681724efab993bdb"} Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.556922 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.566225 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-config-data\") pod \"8e273701-bfd9-47a7-801f-79587c45b401\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.566469 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28c2g\" (UniqueName: \"kubernetes.io/projected/8e273701-bfd9-47a7-801f-79587c45b401-kube-api-access-28c2g\") pod \"8e273701-bfd9-47a7-801f-79587c45b401\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.566701 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-scripts\") pod \"8e273701-bfd9-47a7-801f-79587c45b401\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.566787 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-combined-ca-bundle\") pod \"8e273701-bfd9-47a7-801f-79587c45b401\" (UID: \"8e273701-bfd9-47a7-801f-79587c45b401\") " Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.573273 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e273701-bfd9-47a7-801f-79587c45b401-kube-api-access-28c2g" (OuterVolumeSpecName: "kube-api-access-28c2g") pod "8e273701-bfd9-47a7-801f-79587c45b401" (UID: "8e273701-bfd9-47a7-801f-79587c45b401"). InnerVolumeSpecName "kube-api-access-28c2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.575422 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-scripts" (OuterVolumeSpecName: "scripts") pod "8e273701-bfd9-47a7-801f-79587c45b401" (UID: "8e273701-bfd9-47a7-801f-79587c45b401"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.673001 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28c2g\" (UniqueName: \"kubernetes.io/projected/8e273701-bfd9-47a7-801f-79587c45b401-kube-api-access-28c2g\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.673056 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.772230 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-config-data" (OuterVolumeSpecName: "config-data") pod "8e273701-bfd9-47a7-801f-79587c45b401" (UID: "8e273701-bfd9-47a7-801f-79587c45b401"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.775051 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.787567 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e273701-bfd9-47a7-801f-79587c45b401" (UID: "8e273701-bfd9-47a7-801f-79587c45b401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:53 crc kubenswrapper[5024]: I1128 17:25:53.877326 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e273701-bfd9-47a7-801f-79587c45b401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.144002 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8e273701-bfd9-47a7-801f-79587c45b401","Type":"ContainerDied","Data":"fc82a364a8609e0fc95e9bed35e6ec70c1af83bec6a9d797fd398eaaeeac3848"} Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.144378 5024 scope.go:117] "RemoveContainer" containerID="24c41c98268193f4ca5c5cadce42a96f477a16c6f041bc6a681724efab993bdb" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.144056 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.153195 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerStarted","Data":"f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb"} Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.153278 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-central-agent" containerID="cri-o://5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2" gracePeriod=30 Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.153353 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.153376 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="proxy-httpd" containerID="cri-o://f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb" gracePeriod=30 Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.153410 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="sg-core" containerID="cri-o://c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764" gracePeriod=30 Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.153436 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-notification-agent" containerID="cri-o://c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141" gracePeriod=30 Nov 28 17:25:54 crc 
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.167751 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b002b3a3-108d-4b46-9457-e96a43d82367","Type":"ContainerStarted","Data":"2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b"}
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.167829 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b002b3a3-108d-4b46-9457-e96a43d82367","Type":"ContainerStarted","Data":"f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437"}
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.167843 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b002b3a3-108d-4b46-9457-e96a43d82367","Type":"ContainerStarted","Data":"00dc659b8ebe597a0f5e9acee2a17ad3dbfa911daa73580c6bbeee3afb1d2b50"}
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.189764 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.444686508 podStartE2EDuration="5.189743929s" podCreationTimestamp="2025-11-28 17:25:49 +0000 UTC" firstStartedPulling="2025-11-28 17:25:50.02279828 +0000 UTC m=+1652.071719185" lastFinishedPulling="2025-11-28 17:25:53.767855701 +0000 UTC m=+1655.816776606" observedRunningTime="2025-11-28 17:25:54.183113009 +0000 UTC m=+1656.232033934" watchObservedRunningTime="2025-11-28 17:25:54.189743929 +0000 UTC m=+1656.238664824"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.225194 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.235817 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.237937 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.237917253 podStartE2EDuration="2.237917253s" podCreationTimestamp="2025-11-28 17:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:25:54.220994337 +0000 UTC m=+1656.269915242" watchObservedRunningTime="2025-11-28 17:25:54.237917253 +0000 UTC m=+1656.286838158"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.258057 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Nov 28 17:25:54 crc kubenswrapper[5024]: E1128 17:25:54.258691 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-notifier"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.258708 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-notifier"
Nov 28 17:25:54 crc kubenswrapper[5024]: E1128 17:25:54.258736 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-evaluator"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.258746 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-evaluator"
Nov 28 17:25:54 crc kubenswrapper[5024]: E1128 17:25:54.258779 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-api"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.258786 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-api"
Nov 28 17:25:54 crc kubenswrapper[5024]: E1128 17:25:54.258820 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-listener"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.258829 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-listener"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.259258 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-notifier"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.259279 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-listener"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.259294 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-evaluator"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.259320 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e273701-bfd9-47a7-801f-79587c45b401" containerName="aodh-api"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.263284 5024 scope.go:117] "RemoveContainer" containerID="f64db06990d5b2696f7ec543948ba32cb9c70f1cdd9684b94f581d1be9ae1973"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.265444 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.268604 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.269245 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.269579 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-rjjzq"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.269860 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.275723 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.289649 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.291844 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-scripts\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0"
Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.291980 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0"
\"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-config-data\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.292436 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-public-tls-certs\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.292772 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-internal-tls-certs\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.292905 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tll8z\" (UniqueName: \"kubernetes.io/projected/2e9856a4-36be-4430-a239-6a83871dd474-kube-api-access-tll8z\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.388220 5024 scope.go:117] "RemoveContainer" containerID="6f4f50ebd41355b6ed31b7005a870b865e493a272861fcba9a7b196e8222d971" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.396801 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-scripts\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.396946 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.396983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-config-data\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.397150 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-public-tls-certs\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.397287 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-internal-tls-certs\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.397364 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll8z\" (UniqueName: \"kubernetes.io/projected/2e9856a4-36be-4430-a239-6a83871dd474-kube-api-access-tll8z\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc 
kubenswrapper[5024]: I1128 17:25:54.403701 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-scripts\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.403785 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-internal-tls-certs\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.404528 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.404603 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-config-data\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.408944 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-public-tls-certs\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.422585 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll8z\" (UniqueName: \"kubernetes.io/projected/2e9856a4-36be-4430-a239-6a83871dd474-kube-api-access-tll8z\") pod \"aodh-0\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.516137 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e273701-bfd9-47a7-801f-79587c45b401" path="/var/lib/kubelet/pods/8e273701-bfd9-47a7-801f-79587c45b401/volumes" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.535300 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.543276 5024 scope.go:117] "RemoveContainer" containerID="d0ce6d04ef261f7ab68ab98250d111df1f840840ecdfedb94ac5b910bd19a99f" Nov 28 17:25:54 crc kubenswrapper[5024]: I1128 17:25:54.955300 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.036741 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7kctn"] Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.037002 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" podUID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerName="dnsmasq-dns" containerID="cri-o://aa7ec8dac83a7805f245b77a7995cd4d89452a3fd3b858fed5f1c591450dd90c" gracePeriod=10 Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.095528 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 17:25:55 crc kubenswrapper[5024]: W1128 17:25:55.100345 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e9856a4_36be_4430_a239_6a83871dd474.slice/crio-e0bac06a3eb611f04c591fa7361cb93a70bd4f42edeaac05d7c957e14abf99d5 WatchSource:0}: Error finding container e0bac06a3eb611f04c591fa7361cb93a70bd4f42edeaac05d7c957e14abf99d5: Status 404 returned error can't find the container with id e0bac06a3eb611f04c591fa7361cb93a70bd4f42edeaac05d7c957e14abf99d5 Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.183477 5024 generic.go:334] "Generic (PLEG): container finished" podID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerID="aa7ec8dac83a7805f245b77a7995cd4d89452a3fd3b858fed5f1c591450dd90c" exitCode=0 Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.183816 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" event={"ID":"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e","Type":"ContainerDied","Data":"aa7ec8dac83a7805f245b77a7995cd4d89452a3fd3b858fed5f1c591450dd90c"} Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.188956 5024 generic.go:334] "Generic (PLEG): container finished" podID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerID="f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb" exitCode=0 Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.188978 5024 generic.go:334] "Generic (PLEG): container finished" podID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerID="c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764" exitCode=2 Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.189009 5024 generic.go:334] "Generic (PLEG): container finished" podID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerID="c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141" exitCode=0 Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.189143 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerDied","Data":"f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb"} Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.189163 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerDied","Data":"c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764"} Nov 28 
17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.189200 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerDied","Data":"c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141"} Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.192258 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerStarted","Data":"e0bac06a3eb611f04c591fa7361cb93a70bd4f42edeaac05d7c957e14abf99d5"} Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.708100 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.751615 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-swift-storage-0\") pod \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.751688 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-config\") pod \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.751724 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-sb\") pod \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.751794 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-nb\") pod \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.751915 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-svc\") pod \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.751931 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5krp2\" (UniqueName: \"kubernetes.io/projected/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-kube-api-access-5krp2\") pod \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\" (UID: \"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e\") " Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.755915 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-kube-api-access-5krp2" (OuterVolumeSpecName: "kube-api-access-5krp2") pod "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" (UID: "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e"). InnerVolumeSpecName "kube-api-access-5krp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.816982 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" (UID: "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.818767 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" (UID: "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.827686 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" (UID: "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.841422 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" (UID: "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.847791 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-config" (OuterVolumeSpecName: "config") pod "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" (UID: "4de4b55b-9b1c-4dba-bf83-a71dc6bac13e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.855250 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.855302 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5krp2\" (UniqueName: \"kubernetes.io/projected/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-kube-api-access-5krp2\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.855322 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.855333 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.855342 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:55 crc kubenswrapper[5024]: I1128 17:25:55.855350 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.205890 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerStarted","Data":"dac23869b1289bb8fbec39dcebab8b98a0621be86532f7ee1b00735a86d98a58"} Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.208888 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" event={"ID":"4de4b55b-9b1c-4dba-bf83-a71dc6bac13e","Type":"ContainerDied","Data":"eded1712a97a186a9876a31da090c87c2f54191b6240668acd60003e71ed0861"} Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.208965 5024 scope.go:117] "RemoveContainer" containerID="aa7ec8dac83a7805f245b77a7995cd4d89452a3fd3b858fed5f1c591450dd90c" Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.208989 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7kctn" Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.236150 5024 scope.go:117] "RemoveContainer" containerID="c909157eb0fc792216efc46466096aee63e8aa5f7798c1d7d43588b0de9d93f2" Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.256756 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7kctn"] Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.270635 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7kctn"] Nov 28 17:25:56 crc kubenswrapper[5024]: I1128 17:25:56.528168 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" path="/var/lib/kubelet/pods/4de4b55b-9b1c-4dba-bf83-a71dc6bac13e/volumes" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.172284 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5j9x4"] Nov 28 17:25:57 crc kubenswrapper[5024]: E1128 17:25:57.173242 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerName="init" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.173259 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerName="init" Nov 28 17:25:57 crc kubenswrapper[5024]: E1128 17:25:57.173304 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerName="dnsmasq-dns" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.173312 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerName="dnsmasq-dns" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.173539 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de4b55b-9b1c-4dba-bf83-a71dc6bac13e" containerName="dnsmasq-dns" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.175357 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.190191 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j9x4"] Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.194140 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsd7z\" (UniqueName: \"kubernetes.io/projected/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-kube-api-access-tsd7z\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.194343 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-utilities\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.194598 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-catalog-content\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.225681 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerStarted","Data":"062af754ed65bc2e923d006f61d93f3298b86daf2ec7cd8afc5e8819a4b504cc"} Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.296633 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-utilities\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.296787 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-catalog-content\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.296835 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsd7z\" (UniqueName: \"kubernetes.io/projected/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-kube-api-access-tsd7z\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.297574 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-catalog-content\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.297674 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-utilities\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.318547 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsd7z\" (UniqueName: \"kubernetes.io/projected/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-kube-api-access-tsd7z\") pod \"redhat-marketplace-5j9x4\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") " pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.503786 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j9x4" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.521907 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.542599 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.662071 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:25:57 crc kubenswrapper[5024]: I1128 17:25:57.662445 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.003119 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j9x4"] Nov 28 17:25:58 crc kubenswrapper[5024]: W1128 17:25:58.013429 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d4b906_1dc5_46b4_be8f_d9b8873a41ce.slice/crio-538966a2719d14bd072096c96ea196e027121c00a7c50785359b4be4226d7968 WatchSource:0}: Error finding container 538966a2719d14bd072096c96ea196e027121c00a7c50785359b4be4226d7968: Status 404 returned error can't find the container with id 538966a2719d14bd072096c96ea196e027121c00a7c50785359b4be4226d7968 Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.225403 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.283587 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerStarted","Data":"12125e8db7eb6002da74d08541a8bba33419348a59183974570d98f44a5b5765"} Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.297621 5024 generic.go:334] "Generic (PLEG): container finished" podID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerID="5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2" exitCode=0 Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.297691 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerDied","Data":"5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2"} Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.297755 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b82a034-95aa-410e-b4ef-f99f9d589588","Type":"ContainerDied","Data":"bebaf1c11f8f7c94729039a82e44ed3ecc06008f3769d97d6dd3968151709dcb"} Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.297776 5024 scope.go:117] "RemoveContainer" containerID="f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.297997 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.303120 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j9x4" event={"ID":"35d4b906-1dc5-46b4-be8f-d9b8873a41ce","Type":"ContainerStarted","Data":"9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555"} Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.303173 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j9x4" event={"ID":"35d4b906-1dc5-46b4-be8f-d9b8873a41ce","Type":"ContainerStarted","Data":"538966a2719d14bd072096c96ea196e027121c00a7c50785359b4be4226d7968"} Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.326419 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.327480 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-combined-ca-bundle\") pod \"3b82a034-95aa-410e-b4ef-f99f9d589588\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.327529 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-run-httpd\") pod \"3b82a034-95aa-410e-b4ef-f99f9d589588\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.327556 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-scripts\") pod \"3b82a034-95aa-410e-b4ef-f99f9d589588\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.327678 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-config-data\") pod \"3b82a034-95aa-410e-b4ef-f99f9d589588\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.327776 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhkwk\" (UniqueName: \"kubernetes.io/projected/3b82a034-95aa-410e-b4ef-f99f9d589588-kube-api-access-dhkwk\") pod \"3b82a034-95aa-410e-b4ef-f99f9d589588\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.327803 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-log-httpd\") pod \"3b82a034-95aa-410e-b4ef-f99f9d589588\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.328006 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-sg-core-conf-yaml\") pod \"3b82a034-95aa-410e-b4ef-f99f9d589588\" (UID: \"3b82a034-95aa-410e-b4ef-f99f9d589588\") " Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.329608 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b82a034-95aa-410e-b4ef-f99f9d589588" (UID: "3b82a034-95aa-410e-b4ef-f99f9d589588"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.332139 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b82a034-95aa-410e-b4ef-f99f9d589588" (UID: "3b82a034-95aa-410e-b4ef-f99f9d589588"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.335322 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b82a034-95aa-410e-b4ef-f99f9d589588-kube-api-access-dhkwk" (OuterVolumeSpecName: "kube-api-access-dhkwk") pod "3b82a034-95aa-410e-b4ef-f99f9d589588" (UID: "3b82a034-95aa-410e-b4ef-f99f9d589588"). InnerVolumeSpecName "kube-api-access-dhkwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.345232 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-scripts" (OuterVolumeSpecName: "scripts") pod "3b82a034-95aa-410e-b4ef-f99f9d589588" (UID: "3b82a034-95aa-410e-b4ef-f99f9d589588"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.429725 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b82a034-95aa-410e-b4ef-f99f9d589588" (UID: "3b82a034-95aa-410e-b4ef-f99f9d589588"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.431951 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.431973 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.431985 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhkwk\" (UniqueName: \"kubernetes.io/projected/3b82a034-95aa-410e-b4ef-f99f9d589588-kube-api-access-dhkwk\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.431999 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b82a034-95aa-410e-b4ef-f99f9d589588-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.432009 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.565609 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b82a034-95aa-410e-b4ef-f99f9d589588" (UID: "3b82a034-95aa-410e-b4ef-f99f9d589588"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.600951 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-config-data" (OuterVolumeSpecName: "config-data") pod "3b82a034-95aa-410e-b4ef-f99f9d589588" (UID: "3b82a034-95aa-410e-b4ef-f99f9d589588"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.654000 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.654423 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b82a034-95aa-410e-b4ef-f99f9d589588-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.670131 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-7b526"] Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.671640 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-notification-agent" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.671672 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-notification-agent" Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.680202 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="proxy-httpd" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.680221 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="proxy-httpd" Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.680244 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-central-agent" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.680251 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-central-agent" Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.680323 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="sg-core" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.680329 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="sg-core" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.681375 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-notification-agent" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.681409 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="sg-core" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.681436 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="proxy-httpd" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.681454 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" containerName="ceilometer-central-agent" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.682597 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.685846 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.691189 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.691417 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.694389 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.716357 5024 scope.go:117] "RemoveContainer" containerID="c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.739333 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7b526"] Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.746482 5024 scope.go:117] "RemoveContainer" containerID="c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.757470 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-scripts\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.757770 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-config-data\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.757824 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.757846 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcc24\" (UniqueName: \"kubernetes.io/projected/08d35aa9-bbf5-406f-98c2-7e884f136b29-kube-api-access-rcc24\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.780423 5024 scope.go:117] "RemoveContainer" containerID="5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 
17:25:58.816185 5024 scope.go:117] "RemoveContainer" containerID="f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb" Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.818564 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb\": container with ID starting with f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb not found: ID does not exist" containerID="f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.818615 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb"} err="failed to get container status \"f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb\": rpc error: code = NotFound desc = could not find container \"f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb\": container with ID starting with f20209f13a620a466492055eabfdadb014087c3360c6c66421c44bdfb80310fb not found: ID does not exist" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.818647 5024 scope.go:117] "RemoveContainer" containerID="c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764" Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.821167 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764\": container with ID starting with c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764 not found: ID does not exist" containerID="c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.821239 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764"} err="failed to get container status \"c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764\": rpc error: code = NotFound desc = could not find container \"c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764\": container with ID starting with c3cdbc0a86fbde30665db73ac80ca5c5479876e17ba34133890ad5f082cb1764 not found: ID does not exist" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.821267 5024 scope.go:117] "RemoveContainer" containerID="c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141" Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.825215 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141\": container with ID starting with c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141 not found: ID does not exist" containerID="c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.825270 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141"} err="failed to get container status \"c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141\": rpc error: code = NotFound desc = could not find container \"c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141\": container with ID 
starting with c5d0dffd9d1e6215caf387a1cad8bfb9acc5c1a6d12dc8fe44028c09db403141 not found: ID does not exist" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.825300 5024 scope.go:117] "RemoveContainer" containerID="5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2" Nov 28 17:25:58 crc kubenswrapper[5024]: E1128 17:25:58.829598 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2\": container with ID starting with 5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2 not found: ID does not exist" containerID="5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.829638 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2"} err="failed to get container status \"5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2\": rpc error: code = NotFound desc = could not find container \"5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2\": container with ID starting with 5534693ed33518128d085e55b425872ea2992061fc8c869464ac092fe94ba9f2 not found: ID does not exist" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.859473 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-scripts\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.859576 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-config-data\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.859610 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.859629 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcc24\" (UniqueName: \"kubernetes.io/projected/08d35aa9-bbf5-406f-98c2-7e884f136b29-kube-api-access-rcc24\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.866008 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-config-data\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.866606 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.867421 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-scripts\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.877620 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcc24\" (UniqueName: \"kubernetes.io/projected/08d35aa9-bbf5-406f-98c2-7e884f136b29-kube-api-access-rcc24\") pod \"nova-cell1-cell-mapping-7b526\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") " pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.945373 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.959723 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.974533 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.978699 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.981507 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.981606 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:25:58 crc kubenswrapper[5024]: I1128 17:25:58.987737 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.017792 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7b526" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.063441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.063514 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-config-data\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.063558 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2kp\" (UniqueName: \"kubernetes.io/projected/b912b68c-d877-472f-8c8d-68f1353ac3a0-kube-api-access-ch2kp\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.063573 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-run-httpd\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.063587 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-log-httpd\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.063624 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.063757 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-scripts\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.165229 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-config-data\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.165311 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch2kp\" (UniqueName: \"kubernetes.io/projected/b912b68c-d877-472f-8c8d-68f1353ac3a0-kube-api-access-ch2kp\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.165333 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-run-httpd\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.165350 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-log-httpd\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.165392 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.165862 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-scripts\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.165912 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.170989 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-run-httpd\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.171193 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-log-httpd\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.179402 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-config-data\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.192420 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.209806 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-scripts\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.210832 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch2kp\" (UniqueName: 
\"kubernetes.io/projected/b912b68c-d877-472f-8c8d-68f1353ac3a0-kube-api-access-ch2kp\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.214109 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.391243 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.477696 5024 generic.go:334] "Generic (PLEG): container finished" podID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerID="9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555" exitCode=0 Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.477965 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j9x4" event={"ID":"35d4b906-1dc5-46b4-be8f-d9b8873a41ce","Type":"ContainerDied","Data":"9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555"} Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.521605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerStarted","Data":"0d8d11298432d40baba87a5d8e159b7d94777f3cccbcfc09f3dce60aff49aca0"} Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.562675 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.5330085860000002 podStartE2EDuration="5.562650998s" podCreationTimestamp="2025-11-28 17:25:54 +0000 UTC" firstStartedPulling="2025-11-28 17:25:55.103498715 +0000 UTC m=+1657.152419620" lastFinishedPulling="2025-11-28 17:25:58.133141127 +0000 UTC m=+1660.182062032" observedRunningTime="2025-11-28 17:25:59.54290876 +0000 UTC m=+1661.591829685" watchObservedRunningTime="2025-11-28 17:25:59.562650998 +0000 UTC m=+1661.611571903" Nov 28 17:25:59 crc kubenswrapper[5024]: I1128 17:25:59.800913 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7b526"] Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.139925 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.514429 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b82a034-95aa-410e-b4ef-f99f9d589588" path="/var/lib/kubelet/pods/3b82a034-95aa-410e-b4ef-f99f9d589588/volumes" Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.535760 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7b526" event={"ID":"08d35aa9-bbf5-406f-98c2-7e884f136b29","Type":"ContainerStarted","Data":"2c7a9cda6a007685e5ac80e5e8141da7c784a50d954ba08517a6a5e5c90f7ec4"} Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.535827 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7b526" event={"ID":"08d35aa9-bbf5-406f-98c2-7e884f136b29","Type":"ContainerStarted","Data":"584af80cbabfdfb64755c9d8cc7848f80507b48079edd11c41eed9c65bfbf1e9"} Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.537975 5024 generic.go:334] "Generic (PLEG): container finished" podID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" 
containerID="eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132" exitCode=0 Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.538264 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j9x4" event={"ID":"35d4b906-1dc5-46b4-be8f-d9b8873a41ce","Type":"ContainerDied","Data":"eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132"} Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.539746 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerStarted","Data":"f8918e156386bd0d4ea418a940519630868a8b95f8ecad0a4384740fc12e09ec"} Nov 28 17:26:00 crc kubenswrapper[5024]: I1128 17:26:00.566749 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-7b526" podStartSLOduration=2.566731117 podStartE2EDuration="2.566731117s" podCreationTimestamp="2025-11-28 17:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:26:00.557938955 +0000 UTC m=+1662.606859870" watchObservedRunningTime="2025-11-28 17:26:00.566731117 +0000 UTC m=+1662.615652022" Nov 28 17:26:01 crc kubenswrapper[5024]: I1128 17:26:01.501230 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:26:01 crc kubenswrapper[5024]: E1128 17:26:01.502534 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:26:02 crc kubenswrapper[5024]: I1128 17:26:02.593892 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j9x4" event={"ID":"35d4b906-1dc5-46b4-be8f-d9b8873a41ce","Type":"ContainerStarted","Data":"486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771"} Nov 28 17:26:02 crc kubenswrapper[5024]: I1128 17:26:02.602566 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerStarted","Data":"04429f7cabbc02698fbb0da96ec0f96adb3ac4bb72a4313118de96fcbfeb32e6"} Nov 28 17:26:02 crc kubenswrapper[5024]: I1128 17:26:02.624981 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:26:02 crc kubenswrapper[5024]: I1128 17:26:02.625491 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:26:02 crc kubenswrapper[5024]: I1128 17:26:02.625278 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5j9x4" podStartSLOduration=2.676841537 podStartE2EDuration="5.625258706s" podCreationTimestamp="2025-11-28 17:25:57 +0000 UTC" firstStartedPulling="2025-11-28 17:25:58.305425835 +0000 UTC m=+1660.354346740" lastFinishedPulling="2025-11-28 17:26:01.253843004 +0000 UTC m=+1663.302763909" observedRunningTime="2025-11-28 17:26:02.624536065 +0000 UTC m=+1664.673456980" watchObservedRunningTime="2025-11-28 17:26:02.625258706 +0000 UTC m=+1664.674179611" Nov 28 
Nov 28 17:26:03 crc kubenswrapper[5024]: I1128 17:26:03.616923 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerStarted","Data":"ae11a0410cd4b8e465d555568845c1d900d38d8a3eb632674eea2086e8a26178"}
Nov 28 17:26:03 crc kubenswrapper[5024]: I1128 17:26:03.697369 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.252:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 17:26:03 crc kubenswrapper[5024]: I1128 17:26:03.698456 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 17:26:04 crc kubenswrapper[5024]: I1128 17:26:04.635300 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerStarted","Data":"ba86c445cf22f980125961ce40e90ded99dd6b5d05e59e90dc3c59cd97d1246d"}
Nov 28 17:26:06 crc kubenswrapper[5024]: I1128 17:26:06.727252 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerStarted","Data":"5de478df90ea4389390242e5b868db719cfb6a30e03e75c2867d0200cdacfd01"}
Nov 28 17:26:06 crc kubenswrapper[5024]: I1128 17:26:06.727931 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 17:26:06 crc kubenswrapper[5024]: I1128 17:26:06.774926 5024 generic.go:334] "Generic (PLEG): container finished" podID="08d35aa9-bbf5-406f-98c2-7e884f136b29" containerID="2c7a9cda6a007685e5ac80e5e8141da7c784a50d954ba08517a6a5e5c90f7ec4" exitCode=0
Nov 28 17:26:06 crc kubenswrapper[5024]: I1128 17:26:06.774997 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7b526" event={"ID":"08d35aa9-bbf5-406f-98c2-7e884f136b29","Type":"ContainerDied","Data":"2c7a9cda6a007685e5ac80e5e8141da7c784a50d954ba08517a6a5e5c90f7ec4"}
Nov 28 17:26:06 crc kubenswrapper[5024]: I1128 17:26:06.786380 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.381939572 podStartE2EDuration="8.786359016s" podCreationTimestamp="2025-11-28 17:25:58 +0000 UTC" firstStartedPulling="2025-11-28 17:26:00.129700174 +0000 UTC m=+1662.178621079" lastFinishedPulling="2025-11-28 17:26:05.534119608 +0000 UTC m=+1667.583040523" observedRunningTime="2025-11-28 17:26:06.777332367 +0000 UTC m=+1668.826253272" watchObservedRunningTime="2025-11-28 17:26:06.786359016 +0000 UTC m=+1668.835279921"
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.504394 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5j9x4"
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.504751 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5j9x4"
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.582919 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5j9x4"
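The two "Probe failed" lines above are the kubelet's HTTPS startup probe for nova-api-0 timing out against https://10.217.0.252:8774/ before the API began answering. A rough stand-alone approximation of that check (URL from the log; the helper and the one-second timeout are assumptions, the latter being the kubelet's default timeoutSeconds; kubelet HTTPS probes do not verify the serving certificate):

```python
import ssl
import urllib.request

def https_probe(url: str, timeout: float = 1.0) -> bool:
    """Approximate an HTTPS GET probe: a 2xx/3xx answer beats a timeout."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # probes skip certificate verification
    try:
        with urllib.request.urlopen(url, timeout=timeout, context=ctx) as r:
            return 200 <= r.status < 400
    except Exception as exc:  # a timeout surfaces much like the log output
        print(f'Get "{url}": {exc}')
        return False

https_probe("https://10.217.0.252:8774/")
```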
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.679732 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.691735 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.712551 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.802470 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 28 17:26:07 crc kubenswrapper[5024]: I1128 17:26:07.957235 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5j9x4"
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.114700 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j9x4"]
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.658606 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7b526"
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.807348 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcc24\" (UniqueName: \"kubernetes.io/projected/08d35aa9-bbf5-406f-98c2-7e884f136b29-kube-api-access-rcc24\") pod \"08d35aa9-bbf5-406f-98c2-7e884f136b29\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") "
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.807508 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-combined-ca-bundle\") pod \"08d35aa9-bbf5-406f-98c2-7e884f136b29\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") "
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.807621 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-config-data\") pod \"08d35aa9-bbf5-406f-98c2-7e884f136b29\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") "
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.807689 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-scripts\") pod \"08d35aa9-bbf5-406f-98c2-7e884f136b29\" (UID: \"08d35aa9-bbf5-406f-98c2-7e884f136b29\") "
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.812533 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7b526" event={"ID":"08d35aa9-bbf5-406f-98c2-7e884f136b29","Type":"ContainerDied","Data":"584af80cbabfdfb64755c9d8cc7848f80507b48079edd11c41eed9c65bfbf1e9"}
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.812702 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="584af80cbabfdfb64755c9d8cc7848f80507b48079edd11c41eed9c65bfbf1e9"
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.812712 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7b526"
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.812591 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-scripts" (OuterVolumeSpecName: "scripts") pod "08d35aa9-bbf5-406f-98c2-7e884f136b29" (UID: "08d35aa9-bbf5-406f-98c2-7e884f136b29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.820437 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d35aa9-bbf5-406f-98c2-7e884f136b29-kube-api-access-rcc24" (OuterVolumeSpecName: "kube-api-access-rcc24") pod "08d35aa9-bbf5-406f-98c2-7e884f136b29" (UID: "08d35aa9-bbf5-406f-98c2-7e884f136b29"). InnerVolumeSpecName "kube-api-access-rcc24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.866116 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-config-data" (OuterVolumeSpecName: "config-data") pod "08d35aa9-bbf5-406f-98c2-7e884f136b29" (UID: "08d35aa9-bbf5-406f-98c2-7e884f136b29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.868412 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08d35aa9-bbf5-406f-98c2-7e884f136b29" (UID: "08d35aa9-bbf5-406f-98c2-7e884f136b29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.911835 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.911879 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.911894 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcc24\" (UniqueName: \"kubernetes.io/projected/08d35aa9-bbf5-406f-98c2-7e884f136b29-kube-api-access-rcc24\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.911908 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d35aa9-bbf5-406f-98c2-7e884f136b29-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.950384 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.950637 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-log" containerID="cri-o://f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437" gracePeriod=30
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.951240 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-api" containerID="cri-o://2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b" gracePeriod=30
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.987468 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 17:26:08 crc kubenswrapper[5024]: I1128 17:26:08.987729 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a2c0c8c6-e4ff-490b-94c5-772a7066c4db" containerName="nova-scheduler-scheduler" containerID="cri-o://a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" gracePeriod=30
Nov 28 17:26:09 crc kubenswrapper[5024]: I1128 17:26:09.005713 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 17:26:09 crc kubenswrapper[5024]: E1128 17:26:09.127801 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 28 17:26:09 crc kubenswrapper[5024]: E1128 17:26:09.129526 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 28 17:26:09 crc kubenswrapper[5024]: E1128 17:26:09.132625 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 28 17:26:09 crc kubenswrapper[5024]: E1128 17:26:09.132729 5024 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a2c0c8c6-e4ff-490b-94c5-772a7066c4db" containerName="nova-scheduler-scheduler"
Nov 28 17:26:09 crc kubenswrapper[5024]: E1128 17:26:09.242476 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d35aa9_bbf5_406f_98c2_7e884f136b29.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb002b3a3_108d_4b46_9457_e96a43d82367.slice/crio-conmon-f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb002b3a3_108d_4b46_9457_e96a43d82367.slice/crio-f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08d35aa9_bbf5_406f_98c2_7e884f136b29.slice/crio-584af80cbabfdfb64755c9d8cc7848f80507b48079edd11c41eed9c65bfbf1e9\": RecentStats: unable to find data in memory cache]"
Nov 28 17:26:09 crc kubenswrapper[5024]: I1128 17:26:09.824190 5024 generic.go:334] "Generic (PLEG): container finished" podID="b002b3a3-108d-4b46-9457-e96a43d82367" containerID="f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437" exitCode=143
Nov 28 17:26:09 crc kubenswrapper[5024]: I1128 17:26:09.825214 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5j9x4" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="registry-server" containerID="cri-o://486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771" gracePeriod=2
Nov 28 17:26:09 crc kubenswrapper[5024]: I1128 17:26:09.825625 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b002b3a3-108d-4b46-9457-e96a43d82367","Type":"ContainerDied","Data":"f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437"}
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.553533 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j9x4"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.684934 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-utilities\") pod \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") "
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.685098 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-catalog-content\") pod \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") "
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.685187 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsd7z\" (UniqueName: \"kubernetes.io/projected/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-kube-api-access-tsd7z\") pod \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\" (UID: \"35d4b906-1dc5-46b4-be8f-d9b8873a41ce\") "
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.685775 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-utilities" (OuterVolumeSpecName: "utilities") pod "35d4b906-1dc5-46b4-be8f-d9b8873a41ce" (UID: "35d4b906-1dc5-46b4-be8f-d9b8873a41ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.689964 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.697277 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-kube-api-access-tsd7z" (OuterVolumeSpecName: "kube-api-access-tsd7z") pod "35d4b906-1dc5-46b4-be8f-d9b8873a41ce" (UID: "35d4b906-1dc5-46b4-be8f-d9b8873a41ce"). InnerVolumeSpecName "kube-api-access-tsd7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.717635 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35d4b906-1dc5-46b4-be8f-d9b8873a41ce" (UID: "35d4b906-1dc5-46b4-be8f-d9b8873a41ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.792094 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.792249 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsd7z\" (UniqueName: \"kubernetes.io/projected/35d4b906-1dc5-46b4-be8f-d9b8873a41ce-kube-api-access-tsd7z\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.838219 5024 generic.go:334] "Generic (PLEG): container finished" podID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerID="486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771" exitCode=0
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.838436 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-log" containerID="cri-o://53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d" gracePeriod=30
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.838726 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j9x4"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.850700 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j9x4" event={"ID":"35d4b906-1dc5-46b4-be8f-d9b8873a41ce","Type":"ContainerDied","Data":"486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771"}
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.850760 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j9x4" event={"ID":"35d4b906-1dc5-46b4-be8f-d9b8873a41ce","Type":"ContainerDied","Data":"538966a2719d14bd072096c96ea196e027121c00a7c50785359b4be4226d7968"}
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.850778 5024 scope.go:117] "RemoveContainer" containerID="486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.850930 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-metadata" containerID="cri-o://20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1" gracePeriod=30
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.883480 5024 scope.go:117] "RemoveContainer" containerID="eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.894087 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j9x4"]
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.910754 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j9x4"]
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.912233 5024 scope.go:117] "RemoveContainer" containerID="9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.959385 5024 scope.go:117] "RemoveContainer" containerID="486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771"
Nov 28 17:26:10 crc kubenswrapper[5024]: E1128 17:26:10.960277 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771\": container with ID starting with 486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771 not found: ID does not exist" containerID="486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.960321 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771"} err="failed to get container status \"486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771\": rpc error: code = NotFound desc = could not find container \"486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771\": container with ID starting with 486a92dd6c28943ba103d4ff05f1ed1b001e23c800730c57eb3d7c899a5e5771 not found: ID does not exist"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.960362 5024 scope.go:117] "RemoveContainer" containerID="eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132"
Nov 28 17:26:10 crc kubenswrapper[5024]: E1128 17:26:10.960768 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132\": container with ID starting with eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132 not found: ID does not exist" containerID="eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.960809 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132"} err="failed to get container status \"eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132\": rpc error: code = NotFound desc = could not find container \"eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132\": container with ID starting with eefb62ad8d14a2b90bdea0eac90e53d5fdcc43d441ed86e9c2dfb7529909a132 not found: ID does not exist"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.960824 5024 scope.go:117] "RemoveContainer" containerID="9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555"
Nov 28 17:26:10 crc kubenswrapper[5024]: E1128 17:26:10.961104 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555\": container with ID starting with 9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555 not found: ID does not exist" containerID="9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555"
Nov 28 17:26:10 crc kubenswrapper[5024]: I1128 17:26:10.961190 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555"} err="failed to get container status \"9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555\": rpc error: code = NotFound desc = could not find container \"9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555\": container with ID starting with 9095178d4a20c639c583bff8ff4f78d062d13df963026e24488790986546d555 not found: ID does not exist"
Nov 28 17:26:11 crc kubenswrapper[5024]: I1128 17:26:11.852548 5024 generic.go:334] "Generic (PLEG): container finished" podID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerID="53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d" exitCode=143
Nov 28 17:26:11 crc kubenswrapper[5024]: I1128 17:26:11.852623 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad","Type":"ContainerDied","Data":"53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d"}
Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.498096 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b"
Nov 28 17:26:12 crc kubenswrapper[5024]: E1128 17:26:12.498766 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
path="/var/lib/kubelet/pods/35d4b906-1dc5-46b4-be8f-d9b8873a41ce/volumes" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.659601 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.769318 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfwr2\" (UniqueName: \"kubernetes.io/projected/b002b3a3-108d-4b46-9457-e96a43d82367-kube-api-access-nfwr2\") pod \"b002b3a3-108d-4b46-9457-e96a43d82367\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.769478 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-public-tls-certs\") pod \"b002b3a3-108d-4b46-9457-e96a43d82367\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.769501 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b002b3a3-108d-4b46-9457-e96a43d82367-logs\") pod \"b002b3a3-108d-4b46-9457-e96a43d82367\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.769540 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-internal-tls-certs\") pod \"b002b3a3-108d-4b46-9457-e96a43d82367\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.769641 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-combined-ca-bundle\") pod \"b002b3a3-108d-4b46-9457-e96a43d82367\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.770165 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b002b3a3-108d-4b46-9457-e96a43d82367-logs" (OuterVolumeSpecName: "logs") pod "b002b3a3-108d-4b46-9457-e96a43d82367" (UID: "b002b3a3-108d-4b46-9457-e96a43d82367"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.770352 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-config-data\") pod \"b002b3a3-108d-4b46-9457-e96a43d82367\" (UID: \"b002b3a3-108d-4b46-9457-e96a43d82367\") " Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.771011 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b002b3a3-108d-4b46-9457-e96a43d82367-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.777465 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b002b3a3-108d-4b46-9457-e96a43d82367-kube-api-access-nfwr2" (OuterVolumeSpecName: "kube-api-access-nfwr2") pod "b002b3a3-108d-4b46-9457-e96a43d82367" (UID: "b002b3a3-108d-4b46-9457-e96a43d82367"). InnerVolumeSpecName "kube-api-access-nfwr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.915991 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfwr2\" (UniqueName: \"kubernetes.io/projected/b002b3a3-108d-4b46-9457-e96a43d82367-kube-api-access-nfwr2\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.941854 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-config-data" (OuterVolumeSpecName: "config-data") pod "b002b3a3-108d-4b46-9457-e96a43d82367" (UID: "b002b3a3-108d-4b46-9457-e96a43d82367"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.963700 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b002b3a3-108d-4b46-9457-e96a43d82367" (UID: "b002b3a3-108d-4b46-9457-e96a43d82367"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.980798 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b002b3a3-108d-4b46-9457-e96a43d82367" (UID: "b002b3a3-108d-4b46-9457-e96a43d82367"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.980949 5024 generic.go:334] "Generic (PLEG): container finished" podID="b002b3a3-108d-4b46-9457-e96a43d82367" containerID="2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b" exitCode=0 Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.980991 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b002b3a3-108d-4b46-9457-e96a43d82367","Type":"ContainerDied","Data":"2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b"} Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.981059 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b002b3a3-108d-4b46-9457-e96a43d82367","Type":"ContainerDied","Data":"00dc659b8ebe597a0f5e9acee2a17ad3dbfa911daa73580c6bbeee3afb1d2b50"} Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.981083 5024 scope.go:117] "RemoveContainer" containerID="2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b" Nov 28 17:26:12 crc kubenswrapper[5024]: I1128 17:26:12.981275 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.002413 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b002b3a3-108d-4b46-9457-e96a43d82367" (UID: "b002b3a3-108d-4b46-9457-e96a43d82367"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.027464 5024 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.027509 5024 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.027523 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.027533 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b002b3a3-108d-4b46-9457-e96a43d82367-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.059622 5024 scope.go:117] "RemoveContainer" containerID="f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.083317 5024 scope.go:117] "RemoveContainer" containerID="2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b" Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.084199 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b\": container with ID starting with 2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b not found: ID does not exist" containerID="2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.084262 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b"} err="failed to get container status \"2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b\": rpc error: code = NotFound desc = could not find container \"2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b\": container with ID starting with 2e4b1a71b340e5c98a7bd1c2d986286fc45a14d197b1721228542a01c97a8c3b not found: ID does not exist" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.084295 5024 scope.go:117] "RemoveContainer" containerID="f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437" Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.084616 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437\": container with ID starting with f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437 not found: ID does not exist" containerID="f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.084721 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437"} err="failed to get container status \"f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437\": rpc error: code = NotFound 
desc = could not find container \"f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437\": container with ID starting with f23294cb5c730dc8d30c57e88a2e23cefb9a4ba5b98aa6b241e75ca5803f0437 not found: ID does not exist" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.336648 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.358136 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.372233 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.372900 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="extract-content" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.372927 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="extract-content" Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.372940 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d35aa9-bbf5-406f-98c2-7e884f136b29" containerName="nova-manage" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.372947 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d35aa9-bbf5-406f-98c2-7e884f136b29" containerName="nova-manage" Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.372960 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="registry-server" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.372966 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="registry-server" Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.372987 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-api" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.372994 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-api" Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.373009 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="extract-utilities" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.373015 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="extract-utilities" Nov 28 17:26:13 crc kubenswrapper[5024]: E1128 17:26:13.373047 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-log" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.373053 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-log" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.373323 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d4b906-1dc5-46b4-be8f-d9b8873a41ce" containerName="registry-server" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.373343 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d35aa9-bbf5-406f-98c2-7e884f136b29" containerName="nova-manage" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.373379 5024 memory_manager.go:354] "RemoveStaleState removing 
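The SyncLoop DELETE, REMOVE, ADD sequence for openstack/nova-api-0 above is a delete-and-recreate rather than a restart: the old pod object (UID b002b3a3-...) is torn down, a new one (UID 9de4afa0-..., in the volume lines that follow) takes its place, and the cpu_manager/memory_manager RemoveStaleState entries purge per-container state keyed by the old UID. One way to surface the UID change from an excerpt like this (the `journal` variable is assumed to hold this section's text, one entry per line):

```python
import re

UID = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"

# Collect distinct UIDs appearing in nova-api-0 entries, in order of
# first appearance; a restart keeps one UID, a recreate shows two.
seen: dict[str, None] = {}
for line in journal.splitlines():
    if 'pod="openstack/nova-api-0"' in line or 'pods=["openstack/nova-api-0"]' in line:
        for uid in re.findall(UID, line):
            seen.setdefault(uid, None)
print(list(seen))  # b002b3a3-... first (old pod), then 9de4afa0-... (new)
```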
state" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-log" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.373405 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" containerName="nova-api-api" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.374826 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.378162 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.378403 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.379676 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.402344 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.447432 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxc27\" (UniqueName: \"kubernetes.io/projected/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-kube-api-access-gxc27\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.447489 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.447540 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-config-data\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.447589 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-public-tls-certs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.447912 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.448102 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-logs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.550841 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-logs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.551645 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxc27\" (UniqueName: \"kubernetes.io/projected/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-kube-api-access-gxc27\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.551684 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.551738 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-config-data\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.551795 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-public-tls-certs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.552004 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.552319 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-logs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.558254 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-public-tls-certs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.558350 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.558860 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-config-data\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.559208 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.571536 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxc27\" (UniqueName: \"kubernetes.io/projected/9de4afa0-2f07-41a3-bf8c-a3b3cd056922-kube-api-access-gxc27\") pod \"nova-api-0\" (UID: \"9de4afa0-2f07-41a3-bf8c-a3b3cd056922\") " pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.698803 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.702335 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.757876 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv7h5\" (UniqueName: \"kubernetes.io/projected/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-kube-api-access-jv7h5\") pod \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.758310 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-combined-ca-bundle\") pod \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.758445 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-config-data\") pod \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\" (UID: \"a2c0c8c6-e4ff-490b-94c5-772a7066c4db\") " Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.761958 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-kube-api-access-jv7h5" (OuterVolumeSpecName: "kube-api-access-jv7h5") pod "a2c0c8c6-e4ff-490b-94c5-772a7066c4db" (UID: "a2c0c8c6-e4ff-490b-94c5-772a7066c4db"). InnerVolumeSpecName "kube-api-access-jv7h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.800114 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2c0c8c6-e4ff-490b-94c5-772a7066c4db" (UID: "a2c0c8c6-e4ff-490b-94c5-772a7066c4db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.826679 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-config-data" (OuterVolumeSpecName: "config-data") pod "a2c0c8c6-e4ff-490b-94c5-772a7066c4db" (UID: "a2c0c8c6-e4ff-490b-94c5-772a7066c4db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.864626 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv7h5\" (UniqueName: \"kubernetes.io/projected/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-kube-api-access-jv7h5\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.864666 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:13 crc kubenswrapper[5024]: I1128 17:26:13.864682 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c0c8c6-e4ff-490b-94c5-772a7066c4db-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.054529 5024 generic.go:334] "Generic (PLEG): container finished" podID="a2c0c8c6-e4ff-490b-94c5-772a7066c4db" containerID="a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" exitCode=0 Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.054589 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2c0c8c6-e4ff-490b-94c5-772a7066c4db","Type":"ContainerDied","Data":"a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59"} Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.054626 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2c0c8c6-e4ff-490b-94c5-772a7066c4db","Type":"ContainerDied","Data":"92f97510784ef041436345d9b5a5f0a57cf507796f5a977149c68f2be4d0a0bb"} Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.054648 5024 scope.go:117] "RemoveContainer" containerID="a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.054857 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.148403 5024 scope.go:117] "RemoveContainer" containerID="a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" Nov 28 17:26:14 crc kubenswrapper[5024]: E1128 17:26:14.163237 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59\": container with ID starting with a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59 not found: ID does not exist" containerID="a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.163307 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59"} err="failed to get container status \"a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59\": rpc error: code = NotFound desc = could not find container \"a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59\": container with ID starting with a91e86c21bdf66203598f346421c6bb47c6d0c8246e185df870cfe4082e48e59 not found: ID does not exist" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.225367 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.248369 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.265529 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": read tcp 10.217.0.2:36138->10.217.0.249:8775: read: connection reset by peer" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.265635 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.249:8775/\": read tcp 10.217.0.2:36154->10.217.0.249:8775: read: connection reset by peer" Nov 28 17:26:14 crc kubenswrapper[5024]: W1128 17:26:14.282132 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9de4afa0_2f07_41a3_bf8c_a3b3cd056922.slice/crio-e2ab070bd56bf9b3573b4618acaee8a584250c3e338869a4780d19b1180ca34b WatchSource:0}: Error finding container e2ab070bd56bf9b3573b4618acaee8a584250c3e338869a4780d19b1180ca34b: Status 404 returned error can't find the container with id e2ab070bd56bf9b3573b4618acaee8a584250c3e338869a4780d19b1180ca34b Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.290531 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:26:14 crc kubenswrapper[5024]: E1128 17:26:14.291185 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c0c8c6-e4ff-490b-94c5-772a7066c4db" containerName="nova-scheduler-scheduler" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.291209 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c0c8c6-e4ff-490b-94c5-772a7066c4db" containerName="nova-scheduler-scheduler" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.291445 5024 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a2c0c8c6-e4ff-490b-94c5-772a7066c4db" containerName="nova-scheduler-scheduler" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.292577 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.372417 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.398767 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e558c904-f9dd-4fe7-8a76-80935850c018-config-data\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.398851 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77lb\" (UniqueName: \"kubernetes.io/projected/e558c904-f9dd-4fe7-8a76-80935850c018-kube-api-access-j77lb\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.399368 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e558c904-f9dd-4fe7-8a76-80935850c018-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.429378 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.469013 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.501501 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e558c904-f9dd-4fe7-8a76-80935850c018-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.501736 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e558c904-f9dd-4fe7-8a76-80935850c018-config-data\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.501773 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77lb\" (UniqueName: \"kubernetes.io/projected/e558c904-f9dd-4fe7-8a76-80935850c018-kube-api-access-j77lb\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.511875 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e558c904-f9dd-4fe7-8a76-80935850c018-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.511956 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e558c904-f9dd-4fe7-8a76-80935850c018-config-data\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.528361 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c0c8c6-e4ff-490b-94c5-772a7066c4db" path="/var/lib/kubelet/pods/a2c0c8c6-e4ff-490b-94c5-772a7066c4db/volumes" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.529978 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b002b3a3-108d-4b46-9457-e96a43d82367" path="/var/lib/kubelet/pods/b002b3a3-108d-4b46-9457-e96a43d82367/volumes" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.531279 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77lb\" (UniqueName: \"kubernetes.io/projected/e558c904-f9dd-4fe7-8a76-80935850c018-kube-api-access-j77lb\") pod \"nova-scheduler-0\" (UID: \"e558c904-f9dd-4fe7-8a76-80935850c018\") " pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.616649 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:26:14 crc kubenswrapper[5024]: E1128 17:26:14.645921 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddcbee94_d1aa_4cc0_a8ff_7bd9fb4e03ad.slice/crio-conmon-20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:26:14 crc kubenswrapper[5024]: I1128 17:26:14.896008 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.013562 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-combined-ca-bundle\") pod \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.014279 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-config-data\") pod \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.014414 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-logs\") pod \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.014468 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-nova-metadata-tls-certs\") pod \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.014539 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nblv4\" (UniqueName: \"kubernetes.io/projected/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-kube-api-access-nblv4\") pod \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\" (UID: \"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad\") " Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.017094 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-logs" (OuterVolumeSpecName: "logs") pod "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" (UID: "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.024182 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-kube-api-access-nblv4" (OuterVolumeSpecName: "kube-api-access-nblv4") pod "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" (UID: "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad"). InnerVolumeSpecName "kube-api-access-nblv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.056588 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" (UID: "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.081850 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9de4afa0-2f07-41a3-bf8c-a3b3cd056922","Type":"ContainerStarted","Data":"31ca4955b18fd90c0b65d56cdb2b22e4cfb3af9498c9a9f2c3aa791a4f23ee00"} Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.082913 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9de4afa0-2f07-41a3-bf8c-a3b3cd056922","Type":"ContainerStarted","Data":"e2ab070bd56bf9b3573b4618acaee8a584250c3e338869a4780d19b1180ca34b"} Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.082223 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-config-data" (OuterVolumeSpecName: "config-data") pod "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" (UID: "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.094551 5024 generic.go:334] "Generic (PLEG): container finished" podID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerID="20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1" exitCode=0 Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.094742 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.095581 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad","Type":"ContainerDied","Data":"20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1"} Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.095621 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad","Type":"ContainerDied","Data":"2fddd1a6f862510026c152c8d3546aa5693d285e4e9637aa6a1c968211e5c34b"} Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.095642 5024 scope.go:117] "RemoveContainer" containerID="20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.117699 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.117732 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.117742 5024 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.117750 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nblv4\" (UniqueName: \"kubernetes.io/projected/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-kube-api-access-nblv4\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.123623 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" (UID: "ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.144408 5024 scope.go:117] "RemoveContainer" containerID="53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.177629 5024 scope.go:117] "RemoveContainer" containerID="20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1" Nov 28 17:26:15 crc kubenswrapper[5024]: E1128 17:26:15.178251 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1\": container with ID starting with 20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1 not found: ID does not exist" containerID="20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.178298 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1"} err="failed to get container status \"20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1\": rpc error: code = NotFound desc = could not find container \"20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1\": container with ID starting with 20d505276d662458b1b835a9d05cc17e31fd143dcf977c0e52b1c9d6d6df22a1 not found: ID does not exist" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.178330 5024 scope.go:117] "RemoveContainer" containerID="53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d" Nov 28 17:26:15 crc kubenswrapper[5024]: E1128 17:26:15.178703 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d\": container with ID starting with 53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d not found: ID does not exist" containerID="53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.178853 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d"} err="failed to get container status \"53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d\": rpc error: code = NotFound desc = could not find container \"53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d\": container with ID starting with 53a9bed32b2e554c79badda1d74c143f4c555505985f67cac9b42a4a9ace201d not found: ID does not exist" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.219936 5024 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.236327 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.236307726 podStartE2EDuration="2.236307726s" podCreationTimestamp="2025-11-28 17:26:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:26:15.110904784 +0000 UTC m=+1677.159825699" watchObservedRunningTime="2025-11-28 17:26:15.236307726 +0000 UTC m=+1677.285228631" Nov 28 17:26:15 crc kubenswrapper[5024]: W1128 17:26:15.242085 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode558c904_f9dd_4fe7_8a76_80935850c018.slice/crio-0847209775b5c4c3436faf390159bed469f479731f876b5166a554bfc785a439 WatchSource:0}: Error finding container 0847209775b5c4c3436faf390159bed469f479731f876b5166a554bfc785a439: Status 404 returned error can't find the container with id 0847209775b5c4c3436faf390159bed469f479731f876b5166a554bfc785a439 Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.245453 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.444738 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.476194 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.504217 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:26:15 crc kubenswrapper[5024]: E1128 17:26:15.505289 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-metadata" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.505315 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-metadata" Nov 28 17:26:15 crc kubenswrapper[5024]: E1128 17:26:15.505383 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-log" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.505394 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-log" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.506148 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-metadata" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.506282 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" containerName="nova-metadata-log" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.512475 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.518213 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.518463 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.524425 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.635620 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca6fc94f-0267-49f2-8af3-269c86335d27-logs\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.636085 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-config-data\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.636170 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cx5q\" (UniqueName: \"kubernetes.io/projected/ca6fc94f-0267-49f2-8af3-269c86335d27-kube-api-access-2cx5q\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.636345 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.636401 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.831168 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca6fc94f-0267-49f2-8af3-269c86335d27-logs\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.831292 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-config-data\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.831338 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cx5q\" (UniqueName: \"kubernetes.io/projected/ca6fc94f-0267-49f2-8af3-269c86335d27-kube-api-access-2cx5q\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 
17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.831444 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.831478 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.834810 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca6fc94f-0267-49f2-8af3-269c86335d27-logs\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.842706 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.843947 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-config-data\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.856112 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca6fc94f-0267-49f2-8af3-269c86335d27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:15 crc kubenswrapper[5024]: I1128 17:26:15.865829 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cx5q\" (UniqueName: \"kubernetes.io/projected/ca6fc94f-0267-49f2-8af3-269c86335d27-kube-api-access-2cx5q\") pod \"nova-metadata-0\" (UID: \"ca6fc94f-0267-49f2-8af3-269c86335d27\") " pod="openstack/nova-metadata-0" Nov 28 17:26:16 crc kubenswrapper[5024]: I1128 17:26:16.112263 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9de4afa0-2f07-41a3-bf8c-a3b3cd056922","Type":"ContainerStarted","Data":"0837dae4b3148da7de7c4b9524f7912efd77b46387fe2f44b58ebd76bf90aace"} Nov 28 17:26:16 crc kubenswrapper[5024]: I1128 17:26:16.115568 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e558c904-f9dd-4fe7-8a76-80935850c018","Type":"ContainerStarted","Data":"7e317a4ab47fafa0088f1ceda3da907a55118c04fb09bf0a268e5be9f1fa37ff"} Nov 28 17:26:16 crc kubenswrapper[5024]: I1128 17:26:16.115713 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e558c904-f9dd-4fe7-8a76-80935850c018","Type":"ContainerStarted","Data":"0847209775b5c4c3436faf390159bed469f479731f876b5166a554bfc785a439"} Nov 28 17:26:16 crc kubenswrapper[5024]: I1128 17:26:16.132824 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:26:16 crc kubenswrapper[5024]: I1128 17:26:16.148621 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.148597761 podStartE2EDuration="2.148597761s" podCreationTimestamp="2025-11-28 17:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:26:16.130047328 +0000 UTC m=+1678.178968233" watchObservedRunningTime="2025-11-28 17:26:16.148597761 +0000 UTC m=+1678.197518666" Nov 28 17:26:16 crc kubenswrapper[5024]: I1128 17:26:16.515117 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad" path="/var/lib/kubelet/pods/ddcbee94-d1aa-4cc0-a8ff-7bd9fb4e03ad/volumes" Nov 28 17:26:16 crc kubenswrapper[5024]: I1128 17:26:16.614912 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:26:16 crc kubenswrapper[5024]: W1128 17:26:16.624004 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca6fc94f_0267_49f2_8af3_269c86335d27.slice/crio-ac8b0d05895522d25d80f31759776397b11f21ff63130b0ba430c2672e69e6b8 WatchSource:0}: Error finding container ac8b0d05895522d25d80f31759776397b11f21ff63130b0ba430c2672e69e6b8: Status 404 returned error can't find the container with id ac8b0d05895522d25d80f31759776397b11f21ff63130b0ba430c2672e69e6b8 Nov 28 17:26:17 crc kubenswrapper[5024]: I1128 17:26:17.127033 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca6fc94f-0267-49f2-8af3-269c86335d27","Type":"ContainerStarted","Data":"a164a4b9cd4d8eabe088b6f67fce9b2433b962e8c640a08539bd380a6a2df62d"} Nov 28 17:26:17 crc kubenswrapper[5024]: I1128 17:26:17.128210 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca6fc94f-0267-49f2-8af3-269c86335d27","Type":"ContainerStarted","Data":"ac8b0d05895522d25d80f31759776397b11f21ff63130b0ba430c2672e69e6b8"} Nov 28 17:26:18 crc kubenswrapper[5024]: I1128 17:26:18.140178 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca6fc94f-0267-49f2-8af3-269c86335d27","Type":"ContainerStarted","Data":"c8e02da15fa6c908994d9d88119d9f58ea16e940eafa9ac2f622462f7a3704be"} Nov 28 17:26:18 crc kubenswrapper[5024]: I1128 17:26:18.169922 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.169896969 podStartE2EDuration="3.169896969s" podCreationTimestamp="2025-11-28 17:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:26:18.159292794 +0000 UTC m=+1680.208213729" watchObservedRunningTime="2025-11-28 17:26:18.169896969 +0000 UTC m=+1680.218817874" Nov 28 17:26:19 crc kubenswrapper[5024]: I1128 17:26:19.616907 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 17:26:21 crc kubenswrapper[5024]: I1128 17:26:21.133741 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:26:21 crc kubenswrapper[5024]: I1128 17:26:21.134090 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:26:23 crc kubenswrapper[5024]: 
I1128 17:26:23.703246 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:26:23 crc kubenswrapper[5024]: I1128 17:26:23.703611 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:26:24 crc kubenswrapper[5024]: I1128 17:26:24.617236 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 17:26:24 crc kubenswrapper[5024]: I1128 17:26:24.647592 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 17:26:24 crc kubenswrapper[5024]: I1128 17:26:24.717206 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9de4afa0-2f07-41a3-bf8c-a3b3cd056922" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:26:24 crc kubenswrapper[5024]: I1128 17:26:24.717206 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9de4afa0-2f07-41a3-bf8c-a3b3cd056922" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:26:25 crc kubenswrapper[5024]: I1128 17:26:25.259788 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 17:26:25 crc kubenswrapper[5024]: I1128 17:26:25.499556 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:26:25 crc kubenswrapper[5024]: E1128 17:26:25.500062 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:26:26 crc kubenswrapper[5024]: I1128 17:26:26.133961 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:26:26 crc kubenswrapper[5024]: I1128 17:26:26.134056 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:26:27 crc kubenswrapper[5024]: I1128 17:26:27.147189 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ca6fc94f-0267-49f2-8af3-269c86335d27" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:26:27 crc kubenswrapper[5024]: I1128 17:26:27.147774 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ca6fc94f-0267-49f2-8af3-269c86335d27" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:26:29 crc kubenswrapper[5024]: I1128 17:26:29.397948 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.712889 5024 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.713701 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c48cac67-542a-4982-98f3-19161065f4fc" containerName="kube-state-metrics" containerID="cri-o://b41560ff1c9095e5c76c904102f2614192b2323b7c5a0a7e0ea7b0b8808bed08" gracePeriod=30 Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.720951 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.721759 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.731495 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.752723 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.806620 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:26:33 crc kubenswrapper[5024]: I1128 17:26:33.806829 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="47a2db16-e493-45bc-b0ab-7606965b1612" containerName="mysqld-exporter" containerID="cri-o://cbd840d182f848c421656b0710596616878591dc5a5c9cd3541b49ea8670a7dc" gracePeriod=30 Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.356760 5024 generic.go:334] "Generic (PLEG): container finished" podID="c48cac67-542a-4982-98f3-19161065f4fc" containerID="b41560ff1c9095e5c76c904102f2614192b2323b7c5a0a7e0ea7b0b8808bed08" exitCode=2 Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.357227 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c48cac67-542a-4982-98f3-19161065f4fc","Type":"ContainerDied","Data":"b41560ff1c9095e5c76c904102f2614192b2323b7c5a0a7e0ea7b0b8808bed08"} Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.357288 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c48cac67-542a-4982-98f3-19161065f4fc","Type":"ContainerDied","Data":"90f27a88dc71fbd067518ed68fab1cf74919129b34ba4dbd5f91530ef61045a6"} Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.357305 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90f27a88dc71fbd067518ed68fab1cf74919129b34ba4dbd5f91530ef61045a6" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.359775 5024 generic.go:334] "Generic (PLEG): container finished" podID="47a2db16-e493-45bc-b0ab-7606965b1612" containerID="cbd840d182f848c421656b0710596616878591dc5a5c9cd3541b49ea8670a7dc" exitCode=2 Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.359863 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"47a2db16-e493-45bc-b0ab-7606965b1612","Type":"ContainerDied","Data":"cbd840d182f848c421656b0710596616878591dc5a5c9cd3541b49ea8670a7dc"} Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.359908 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"47a2db16-e493-45bc-b0ab-7606965b1612","Type":"ContainerDied","Data":"fa2fa941404488f36bc0e431d3f7a9a41ea014f56e88f1df38a184ec5746bba7"} Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 
17:26:34.359921 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa2fa941404488f36bc0e431d3f7a9a41ea014f56e88f1df38a184ec5746bba7" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.360167 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.369048 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.448202 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.450841 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.492052 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-combined-ca-bundle\") pod \"47a2db16-e493-45bc-b0ab-7606965b1612\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.492152 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l6wx\" (UniqueName: \"kubernetes.io/projected/c48cac67-542a-4982-98f3-19161065f4fc-kube-api-access-4l6wx\") pod \"c48cac67-542a-4982-98f3-19161065f4fc\" (UID: \"c48cac67-542a-4982-98f3-19161065f4fc\") " Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.492264 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-config-data\") pod \"47a2db16-e493-45bc-b0ab-7606965b1612\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.492371 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdlmm\" (UniqueName: \"kubernetes.io/projected/47a2db16-e493-45bc-b0ab-7606965b1612-kube-api-access-cdlmm\") pod \"47a2db16-e493-45bc-b0ab-7606965b1612\" (UID: \"47a2db16-e493-45bc-b0ab-7606965b1612\") " Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.500415 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a2db16-e493-45bc-b0ab-7606965b1612-kube-api-access-cdlmm" (OuterVolumeSpecName: "kube-api-access-cdlmm") pod "47a2db16-e493-45bc-b0ab-7606965b1612" (UID: "47a2db16-e493-45bc-b0ab-7606965b1612"). InnerVolumeSpecName "kube-api-access-cdlmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.508651 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48cac67-542a-4982-98f3-19161065f4fc-kube-api-access-4l6wx" (OuterVolumeSpecName: "kube-api-access-4l6wx") pod "c48cac67-542a-4982-98f3-19161065f4fc" (UID: "c48cac67-542a-4982-98f3-19161065f4fc"). InnerVolumeSpecName "kube-api-access-4l6wx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.595253 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l6wx\" (UniqueName: \"kubernetes.io/projected/c48cac67-542a-4982-98f3-19161065f4fc-kube-api-access-4l6wx\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.595283 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdlmm\" (UniqueName: \"kubernetes.io/projected/47a2db16-e493-45bc-b0ab-7606965b1612-kube-api-access-cdlmm\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.605355 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47a2db16-e493-45bc-b0ab-7606965b1612" (UID: "47a2db16-e493-45bc-b0ab-7606965b1612"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.637623 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-config-data" (OuterVolumeSpecName: "config-data") pod "47a2db16-e493-45bc-b0ab-7606965b1612" (UID: "47a2db16-e493-45bc-b0ab-7606965b1612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.697945 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:34 crc kubenswrapper[5024]: I1128 17:26:34.697981 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47a2db16-e493-45bc-b0ab-7606965b1612-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.370410 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.370415 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.409198 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.420779 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.435570 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.451153 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.468554 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: E1128 17:26:35.469250 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48cac67-542a-4982-98f3-19161065f4fc" containerName="kube-state-metrics" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.469271 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48cac67-542a-4982-98f3-19161065f4fc" containerName="kube-state-metrics" Nov 28 17:26:35 crc kubenswrapper[5024]: E1128 17:26:35.469332 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a2db16-e493-45bc-b0ab-7606965b1612" containerName="mysqld-exporter" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.469342 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a2db16-e493-45bc-b0ab-7606965b1612" containerName="mysqld-exporter" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.469610 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a2db16-e493-45bc-b0ab-7606965b1612" containerName="mysqld-exporter" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.469633 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c48cac67-542a-4982-98f3-19161065f4fc" containerName="kube-state-metrics" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.470661 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.481690 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.481752 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.494762 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.497562 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.505528 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.509134 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.521692 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.575061 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.638563 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.639834 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmlxd\" (UniqueName: \"kubernetes.io/projected/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-api-access-hmlxd\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.639906 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plt8\" (UniqueName: \"kubernetes.io/projected/141d5e1c-7eb9-40c1-9855-c048660125f6-kube-api-access-8plt8\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.640000 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.640281 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.640441 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.640693 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc 
kubenswrapper[5024]: I1128 17:26:35.640763 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.742962 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.744237 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmlxd\" (UniqueName: \"kubernetes.io/projected/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-api-access-hmlxd\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.744276 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8plt8\" (UniqueName: \"kubernetes.io/projected/141d5e1c-7eb9-40c1-9855-c048660125f6-kube-api-access-8plt8\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.744327 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.744472 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.744563 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.744685 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.744723 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.748335 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.748992 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.749320 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.750395 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.751194 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141d5e1c-7eb9-40c1-9855-c048660125f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.751592 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38186f7-7448-4cdd-8f18-0336385c33ad-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.765298 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmlxd\" (UniqueName: \"kubernetes.io/projected/c38186f7-7448-4cdd-8f18-0336385c33ad-kube-api-access-hmlxd\") pod \"kube-state-metrics-0\" (UID: \"c38186f7-7448-4cdd-8f18-0336385c33ad\") " pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.765384 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8plt8\" (UniqueName: \"kubernetes.io/projected/141d5e1c-7eb9-40c1-9855-c048660125f6-kube-api-access-8plt8\") pod \"mysqld-exporter-0\" (UID: \"141d5e1c-7eb9-40c1-9855-c048660125f6\") " pod="openstack/mysqld-exporter-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.825225 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:26:35 crc kubenswrapper[5024]: I1128 17:26:35.848095 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.140837 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.142252 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.156718 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.332981 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.333382 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-central-agent" containerID="cri-o://04429f7cabbc02698fbb0da96ec0f96adb3ac4bb72a4313118de96fcbfeb32e6" gracePeriod=30 Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.333516 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="proxy-httpd" containerID="cri-o://5de478df90ea4389390242e5b868db719cfb6a30e03e75c2867d0200cdacfd01" gracePeriod=30 Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.333573 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="sg-core" containerID="cri-o://ba86c445cf22f980125961ce40e90ded99dd6b5d05e59e90dc3c59cd97d1246d" gracePeriod=30 Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.333633 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-notification-agent" containerID="cri-o://ae11a0410cd4b8e465d555568845c1d900d38d8a3eb632674eea2086e8a26178" gracePeriod=30 Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.477773 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:26:36 crc kubenswrapper[5024]: W1128 17:26:36.495667 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38186f7_7448_4cdd_8f18_0336385c33ad.slice/crio-86a75ee4bd4c7bead2f1bc7df4d68653ce127f7b15b527fc520f53b0e236955b WatchSource:0}: Error finding container 86a75ee4bd4c7bead2f1bc7df4d68653ce127f7b15b527fc520f53b0e236955b: Status 404 returned error can't find the container with id 86a75ee4bd4c7bead2f1bc7df4d68653ce127f7b15b527fc520f53b0e236955b Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.498354 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.519288 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a2db16-e493-45bc-b0ab-7606965b1612" path="/var/lib/kubelet/pods/47a2db16-e493-45bc-b0ab-7606965b1612/volumes" Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.519882 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48cac67-542a-4982-98f3-19161065f4fc" path="/var/lib/kubelet/pods/c48cac67-542a-4982-98f3-19161065f4fc/volumes" Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.581872 5024 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 28 17:26:36 crc kubenswrapper[5024]: W1128 17:26:36.584811 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod141d5e1c_7eb9_40c1_9855_c048660125f6.slice/crio-9a95329242eb21c3768ddb6b2d0fe1fd6ee5a1c52d839d1e60a4d80b58030234 WatchSource:0}: Error finding container 9a95329242eb21c3768ddb6b2d0fe1fd6ee5a1c52d839d1e60a4d80b58030234: Status 404 returned error can't find the container with id 9a95329242eb21c3768ddb6b2d0fe1fd6ee5a1c52d839d1e60a4d80b58030234 Nov 28 17:26:36 crc kubenswrapper[5024]: I1128 17:26:36.585162 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.399932 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c38186f7-7448-4cdd-8f18-0336385c33ad","Type":"ContainerStarted","Data":"d81b1682593f35355c949342f5ae00c6c6c107799bd8d45ca259ab6bc620446c"} Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.400316 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c38186f7-7448-4cdd-8f18-0336385c33ad","Type":"ContainerStarted","Data":"86a75ee4bd4c7bead2f1bc7df4d68653ce127f7b15b527fc520f53b0e236955b"} Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.401815 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.403996 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"141d5e1c-7eb9-40c1-9855-c048660125f6","Type":"ContainerStarted","Data":"9a95329242eb21c3768ddb6b2d0fe1fd6ee5a1c52d839d1e60a4d80b58030234"} Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.407308 5024 generic.go:334] "Generic (PLEG): container finished" podID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerID="5de478df90ea4389390242e5b868db719cfb6a30e03e75c2867d0200cdacfd01" exitCode=0 Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.407338 5024 generic.go:334] "Generic (PLEG): container finished" podID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerID="ba86c445cf22f980125961ce40e90ded99dd6b5d05e59e90dc3c59cd97d1246d" exitCode=2 Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.407347 5024 generic.go:334] "Generic (PLEG): container finished" podID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerID="04429f7cabbc02698fbb0da96ec0f96adb3ac4bb72a4313118de96fcbfeb32e6" exitCode=0 Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.408774 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerDied","Data":"5de478df90ea4389390242e5b868db719cfb6a30e03e75c2867d0200cdacfd01"} Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.408814 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerDied","Data":"ba86c445cf22f980125961ce40e90ded99dd6b5d05e59e90dc3c59cd97d1246d"} Nov 28 17:26:37 crc kubenswrapper[5024]: I1128 17:26:37.408830 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerDied","Data":"04429f7cabbc02698fbb0da96ec0f96adb3ac4bb72a4313118de96fcbfeb32e6"} Nov 28 17:26:38 
crc kubenswrapper[5024]: I1128 17:26:38.423309 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"141d5e1c-7eb9-40c1-9855-c048660125f6","Type":"ContainerStarted","Data":"771e37c1985dc8d72f9224939e9c94102686f2e0ccd7f9cf6d26f6c6e42ced21"} Nov 28 17:26:38 crc kubenswrapper[5024]: I1128 17:26:38.444516 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.068214761 podStartE2EDuration="3.444496219s" podCreationTimestamp="2025-11-28 17:26:35 +0000 UTC" firstStartedPulling="2025-11-28 17:26:36.498144415 +0000 UTC m=+1698.547065320" lastFinishedPulling="2025-11-28 17:26:36.874425873 +0000 UTC m=+1698.923346778" observedRunningTime="2025-11-28 17:26:37.425384549 +0000 UTC m=+1699.474305454" watchObservedRunningTime="2025-11-28 17:26:38.444496219 +0000 UTC m=+1700.493417124" Nov 28 17:26:38 crc kubenswrapper[5024]: I1128 17:26:38.446188 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.113200981 podStartE2EDuration="3.446180887s" podCreationTimestamp="2025-11-28 17:26:35 +0000 UTC" firstStartedPulling="2025-11-28 17:26:36.590259828 +0000 UTC m=+1698.639180733" lastFinishedPulling="2025-11-28 17:26:37.923239734 +0000 UTC m=+1699.972160639" observedRunningTime="2025-11-28 17:26:38.443352675 +0000 UTC m=+1700.492273590" watchObservedRunningTime="2025-11-28 17:26:38.446180887 +0000 UTC m=+1700.495101792" Nov 28 17:26:40 crc kubenswrapper[5024]: I1128 17:26:40.500734 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:26:40 crc kubenswrapper[5024]: E1128 17:26:40.501675 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.477525 5024 generic.go:334] "Generic (PLEG): container finished" podID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerID="ae11a0410cd4b8e465d555568845c1d900d38d8a3eb632674eea2086e8a26178" exitCode=0 Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.477599 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerDied","Data":"ae11a0410cd4b8e465d555568845c1d900d38d8a3eb632674eea2086e8a26178"} Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.477951 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b912b68c-d877-472f-8c8d-68f1353ac3a0","Type":"ContainerDied","Data":"f8918e156386bd0d4ea418a940519630868a8b95f8ecad0a4384740fc12e09ec"} Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.477975 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8918e156386bd0d4ea418a940519630868a8b95f8ecad0a4384740fc12e09ec" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.534990 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.595907 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-combined-ca-bundle\") pod \"b912b68c-d877-472f-8c8d-68f1353ac3a0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.595949 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch2kp\" (UniqueName: \"kubernetes.io/projected/b912b68c-d877-472f-8c8d-68f1353ac3a0-kube-api-access-ch2kp\") pod \"b912b68c-d877-472f-8c8d-68f1353ac3a0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.604735 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b912b68c-d877-472f-8c8d-68f1353ac3a0-kube-api-access-ch2kp" (OuterVolumeSpecName: "kube-api-access-ch2kp") pod "b912b68c-d877-472f-8c8d-68f1353ac3a0" (UID: "b912b68c-d877-472f-8c8d-68f1353ac3a0"). InnerVolumeSpecName "kube-api-access-ch2kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.697780 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-config-data\") pod \"b912b68c-d877-472f-8c8d-68f1353ac3a0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.698115 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-log-httpd\") pod \"b912b68c-d877-472f-8c8d-68f1353ac3a0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.698188 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-scripts\") pod \"b912b68c-d877-472f-8c8d-68f1353ac3a0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.698241 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-sg-core-conf-yaml\") pod \"b912b68c-d877-472f-8c8d-68f1353ac3a0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.698371 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-run-httpd\") pod \"b912b68c-d877-472f-8c8d-68f1353ac3a0\" (UID: \"b912b68c-d877-472f-8c8d-68f1353ac3a0\") " Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.698590 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b912b68c-d877-472f-8c8d-68f1353ac3a0" (UID: "b912b68c-d877-472f-8c8d-68f1353ac3a0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.698941 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b912b68c-d877-472f-8c8d-68f1353ac3a0" (UID: "b912b68c-d877-472f-8c8d-68f1353ac3a0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.699038 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.699055 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch2kp\" (UniqueName: \"kubernetes.io/projected/b912b68c-d877-472f-8c8d-68f1353ac3a0-kube-api-access-ch2kp\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.706687 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-scripts" (OuterVolumeSpecName: "scripts") pod "b912b68c-d877-472f-8c8d-68f1353ac3a0" (UID: "b912b68c-d877-472f-8c8d-68f1353ac3a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.713995 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b912b68c-d877-472f-8c8d-68f1353ac3a0" (UID: "b912b68c-d877-472f-8c8d-68f1353ac3a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.746137 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b912b68c-d877-472f-8c8d-68f1353ac3a0" (UID: "b912b68c-d877-472f-8c8d-68f1353ac3a0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.801368 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.801407 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.801417 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.801428 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b912b68c-d877-472f-8c8d-68f1353ac3a0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.822772 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-config-data" (OuterVolumeSpecName: "config-data") pod "b912b68c-d877-472f-8c8d-68f1353ac3a0" (UID: "b912b68c-d877-472f-8c8d-68f1353ac3a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:26:41 crc kubenswrapper[5024]: I1128 17:26:41.904087 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b912b68c-d877-472f-8c8d-68f1353ac3a0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.489882 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.689100 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.741417 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.802696 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:26:42 crc kubenswrapper[5024]: E1128 17:26:42.803676 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="proxy-httpd" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.803694 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="proxy-httpd" Nov 28 17:26:42 crc kubenswrapper[5024]: E1128 17:26:42.803717 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="sg-core" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.803723 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="sg-core" Nov 28 17:26:42 crc kubenswrapper[5024]: E1128 17:26:42.803752 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-notification-agent" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.803758 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-notification-agent" Nov 28 17:26:42 crc kubenswrapper[5024]: E1128 17:26:42.803782 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-central-agent" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.803788 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-central-agent" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.804061 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-central-agent" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.804077 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="proxy-httpd" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.804103 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="sg-core" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.804117 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" containerName="ceilometer-notification-agent" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.806666 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.810942 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.811004 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.811050 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.819563 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.859873 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.859949 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-log-httpd\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.860037 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-run-httpd\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.860067 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-scripts\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.860202 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vw2n\" (UniqueName: \"kubernetes.io/projected/0391d8f9-2f67-416d-9a1a-849fcf7cb500-kube-api-access-2vw2n\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.860238 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.860505 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-config-data\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.860619 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.962447 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vw2n\" (UniqueName: \"kubernetes.io/projected/0391d8f9-2f67-416d-9a1a-849fcf7cb500-kube-api-access-2vw2n\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.962506 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.962609 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-config-data\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.962636 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.962692 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.963539 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-log-httpd\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.963923 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-log-httpd\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.964087 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-run-httpd\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.964123 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-scripts\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.964832 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-run-httpd\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.969626 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.969926 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.970114 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-config-data\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.970504 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-scripts\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.971878 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:42 crc kubenswrapper[5024]: I1128 17:26:42.989740 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vw2n\" (UniqueName: \"kubernetes.io/projected/0391d8f9-2f67-416d-9a1a-849fcf7cb500-kube-api-access-2vw2n\") pod \"ceilometer-0\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " pod="openstack/ceilometer-0" Nov 28 17:26:43 crc kubenswrapper[5024]: I1128 17:26:43.132280 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:26:43 crc kubenswrapper[5024]: I1128 17:26:43.681924 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:26:44 crc kubenswrapper[5024]: I1128 17:26:44.511642 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b912b68c-d877-472f-8c8d-68f1353ac3a0" path="/var/lib/kubelet/pods/b912b68c-d877-472f-8c8d-68f1353ac3a0/volumes" Nov 28 17:26:44 crc kubenswrapper[5024]: I1128 17:26:44.529279 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerStarted","Data":"3ec33dadc9f705fe96fbc82342eaf0269ccb7e906edd878fb7988f298a7d9c0d"} Nov 28 17:26:45 crc kubenswrapper[5024]: I1128 17:26:45.546680 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerStarted","Data":"0b2fba5af1fae99f388f06bf30340d3887b04fad1b592f4fa47794818f6c63bb"} Nov 28 17:26:45 crc kubenswrapper[5024]: I1128 17:26:45.841719 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 17:26:46 crc kubenswrapper[5024]: I1128 17:26:46.560486 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerStarted","Data":"0bb7c34e526f3a7b9013457cb485a2d25e7085e6f45262794a825ac398d4791a"} Nov 28 17:26:47 crc kubenswrapper[5024]: I1128 17:26:47.575320 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerStarted","Data":"f2d13446a2387a99a8619c4cf10a4f126166964bae32594f2aacc7682df50410"} Nov 28 17:26:50 crc kubenswrapper[5024]: I1128 17:26:50.612967 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerStarted","Data":"f97b74a3b09f014f05efe6a017bec93724172b9c68d6a011882f7de6f09455a4"} Nov 28 17:26:50 crc kubenswrapper[5024]: I1128 17:26:50.613685 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:26:50 crc kubenswrapper[5024]: I1128 17:26:50.646545 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8658359620000002 podStartE2EDuration="8.646522508s" podCreationTimestamp="2025-11-28 17:26:42 +0000 UTC" firstStartedPulling="2025-11-28 17:26:43.690877611 +0000 UTC m=+1705.739798516" lastFinishedPulling="2025-11-28 17:26:49.471564157 +0000 UTC m=+1711.520485062" observedRunningTime="2025-11-28 17:26:50.636010353 +0000 UTC m=+1712.684931268" watchObservedRunningTime="2025-11-28 17:26:50.646522508 +0000 UTC m=+1712.695443413" Nov 28 17:26:54 crc kubenswrapper[5024]: I1128 17:26:54.498782 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:26:54 crc kubenswrapper[5024]: E1128 17:26:54.499259 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" 
podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:27:05 crc kubenswrapper[5024]: I1128 17:27:05.498463 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:27:05 crc kubenswrapper[5024]: E1128 17:27:05.499336 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:27:13 crc kubenswrapper[5024]: I1128 17:27:13.144322 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 17:27:16 crc kubenswrapper[5024]: I1128 17:27:16.501853 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:27:16 crc kubenswrapper[5024]: E1128 17:27:16.503031 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.102595 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-gsz7r"] Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.113899 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-gsz7r"] Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.271481 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-hbbrk"] Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.273579 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.290993 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hbbrk"] Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.381948 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-combined-ca-bundle\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.382006 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-config-data\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.382045 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlb8\" (UniqueName: \"kubernetes.io/projected/3a7fb5de-075a-4c27-a648-e6762bd7c941-kube-api-access-6hlb8\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.483676 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-combined-ca-bundle\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.484045 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-config-data\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.484080 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hlb8\" (UniqueName: \"kubernetes.io/projected/3a7fb5de-075a-4c27-a648-e6762bd7c941-kube-api-access-6hlb8\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.498031 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-combined-ca-bundle\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.499870 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-config-data\") pod \"heat-db-sync-hbbrk\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.503507 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hlb8\" (UniqueName: \"kubernetes.io/projected/3a7fb5de-075a-4c27-a648-e6762bd7c941-kube-api-access-6hlb8\") pod \"heat-db-sync-hbbrk\" (UID: 
\"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:25 crc kubenswrapper[5024]: I1128 17:27:25.617856 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hbbrk" Nov 28 17:27:26 crc kubenswrapper[5024]: I1128 17:27:26.108665 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hbbrk"] Nov 28 17:27:26 crc kubenswrapper[5024]: I1128 17:27:26.515944 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b6fe11-1216-4090-b1eb-fb7516bd0977" path="/var/lib/kubelet/pods/a2b6fe11-1216-4090-b1eb-fb7516bd0977/volumes" Nov 28 17:27:27 crc kubenswrapper[5024]: I1128 17:27:27.101897 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hbbrk" event={"ID":"3a7fb5de-075a-4c27-a648-e6762bd7c941","Type":"ContainerStarted","Data":"d439e7d6cf3e2820cb998b0d0b0de34c348d0a6df079fc57e30b2e3ad858c6e9"} Nov 28 17:27:27 crc kubenswrapper[5024]: I1128 17:27:27.118456 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:27:27 crc kubenswrapper[5024]: I1128 17:27:27.698491 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:27:27 crc kubenswrapper[5024]: I1128 17:27:27.698870 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-central-agent" containerID="cri-o://0b2fba5af1fae99f388f06bf30340d3887b04fad1b592f4fa47794818f6c63bb" gracePeriod=30 Nov 28 17:27:27 crc kubenswrapper[5024]: I1128 17:27:27.699061 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="sg-core" containerID="cri-o://f2d13446a2387a99a8619c4cf10a4f126166964bae32594f2aacc7682df50410" gracePeriod=30 Nov 28 17:27:27 crc kubenswrapper[5024]: I1128 17:27:27.699134 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-notification-agent" containerID="cri-o://0bb7c34e526f3a7b9013457cb485a2d25e7085e6f45262794a825ac398d4791a" gracePeriod=30 Nov 28 17:27:27 crc kubenswrapper[5024]: I1128 17:27:27.699206 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="proxy-httpd" containerID="cri-o://f97b74a3b09f014f05efe6a017bec93724172b9c68d6a011882f7de6f09455a4" gracePeriod=30 Nov 28 17:27:28 crc kubenswrapper[5024]: I1128 17:27:28.123534 5024 generic.go:334] "Generic (PLEG): container finished" podID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerID="f97b74a3b09f014f05efe6a017bec93724172b9c68d6a011882f7de6f09455a4" exitCode=0 Nov 28 17:27:28 crc kubenswrapper[5024]: I1128 17:27:28.123567 5024 generic.go:334] "Generic (PLEG): container finished" podID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerID="f2d13446a2387a99a8619c4cf10a4f126166964bae32594f2aacc7682df50410" exitCode=2 Nov 28 17:27:28 crc kubenswrapper[5024]: I1128 17:27:28.123591 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerDied","Data":"f97b74a3b09f014f05efe6a017bec93724172b9c68d6a011882f7de6f09455a4"} Nov 28 17:27:28 crc kubenswrapper[5024]: I1128 17:27:28.123619 5024 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerDied","Data":"f2d13446a2387a99a8619c4cf10a4f126166964bae32594f2aacc7682df50410"} Nov 28 17:27:28 crc kubenswrapper[5024]: I1128 17:27:28.512167 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:27:28 crc kubenswrapper[5024]: E1128 17:27:28.512689 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:27:28 crc kubenswrapper[5024]: I1128 17:27:28.600443 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.179747 5024 generic.go:334] "Generic (PLEG): container finished" podID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerID="0bb7c34e526f3a7b9013457cb485a2d25e7085e6f45262794a825ac398d4791a" exitCode=0 Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.180051 5024 generic.go:334] "Generic (PLEG): container finished" podID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerID="0b2fba5af1fae99f388f06bf30340d3887b04fad1b592f4fa47794818f6c63bb" exitCode=0 Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.180077 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerDied","Data":"0bb7c34e526f3a7b9013457cb485a2d25e7085e6f45262794a825ac398d4791a"} Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.180104 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerDied","Data":"0b2fba5af1fae99f388f06bf30340d3887b04fad1b592f4fa47794818f6c63bb"} Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.726341 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.782638 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.782749 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-ceilometer-tls-certs\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.782802 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-config-data\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.782857 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-run-httpd\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.782945 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-log-httpd\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.782999 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-sg-core-conf-yaml\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.783045 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-scripts\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.783082 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vw2n\" (UniqueName: \"kubernetes.io/projected/0391d8f9-2f67-416d-9a1a-849fcf7cb500-kube-api-access-2vw2n\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.784834 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.786363 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.791917 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0391d8f9-2f67-416d-9a1a-849fcf7cb500-kube-api-access-2vw2n" (OuterVolumeSpecName: "kube-api-access-2vw2n") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "kube-api-access-2vw2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.795330 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-scripts" (OuterVolumeSpecName: "scripts") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.862542 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.902522 5024 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.902568 5024 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0391d8f9-2f67-416d-9a1a-849fcf7cb500-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.902591 5024 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.902607 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.902625 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vw2n\" (UniqueName: \"kubernetes.io/projected/0391d8f9-2f67-416d-9a1a-849fcf7cb500-kube-api-access-2vw2n\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:29 crc kubenswrapper[5024]: I1128 17:27:29.954203 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.005396 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.006179 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle\") pod \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\" (UID: \"0391d8f9-2f67-416d-9a1a-849fcf7cb500\") " Nov 28 17:27:30 crc kubenswrapper[5024]: W1128 17:27:30.006390 5024 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0391d8f9-2f67-416d-9a1a-849fcf7cb500/volumes/kubernetes.io~secret/combined-ca-bundle Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.006408 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.006922 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.006940 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.060569 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-config-data" (OuterVolumeSpecName: "config-data") pod "0391d8f9-2f67-416d-9a1a-849fcf7cb500" (UID: "0391d8f9-2f67-416d-9a1a-849fcf7cb500"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.109144 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0391d8f9-2f67-416d-9a1a-849fcf7cb500-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.202534 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0391d8f9-2f67-416d-9a1a-849fcf7cb500","Type":"ContainerDied","Data":"3ec33dadc9f705fe96fbc82342eaf0269ccb7e906edd878fb7988f298a7d9c0d"} Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.202587 5024 scope.go:117] "RemoveContainer" containerID="f97b74a3b09f014f05efe6a017bec93724172b9c68d6a011882f7de6f09455a4" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.202825 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.249896 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.252465 5024 scope.go:117] "RemoveContainer" containerID="f2d13446a2387a99a8619c4cf10a4f126166964bae32594f2aacc7682df50410" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.274132 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.295216 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:27:30 crc kubenswrapper[5024]: E1128 17:27:30.295738 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-central-agent" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.295755 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-central-agent" Nov 28 17:27:30 crc kubenswrapper[5024]: E1128 17:27:30.299148 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="sg-core" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.299185 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="sg-core" Nov 28 17:27:30 crc kubenswrapper[5024]: E1128 17:27:30.299223 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-notification-agent" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.299231 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-notification-agent" Nov 28 17:27:30 crc kubenswrapper[5024]: E1128 17:27:30.299242 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="proxy-httpd" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.299250 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="proxy-httpd" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.307262 5024 scope.go:117] "RemoveContainer" containerID="0bb7c34e526f3a7b9013457cb485a2d25e7085e6f45262794a825ac398d4791a" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.310431 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-notification-agent" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.310491 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="sg-core" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.310509 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="proxy-httpd" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.310537 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" containerName="ceilometer-central-agent" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.317170 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.320882 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.321124 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.321527 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.348146 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.354228 5024 scope.go:117] "RemoveContainer" containerID="0b2fba5af1fae99f388f06bf30340d3887b04fad1b592f4fa47794818f6c63bb" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.418251 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-config-data\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.418312 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446db982-05e3-4131-aaf7-07e42b726565-run-httpd\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.422178 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.422300 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhp6d\" (UniqueName: \"kubernetes.io/projected/446db982-05e3-4131-aaf7-07e42b726565-kube-api-access-zhp6d\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.422335 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-scripts\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.422354 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.422431 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: 
I1128 17:27:30.422472 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446db982-05e3-4131-aaf7-07e42b726565-log-httpd\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.515004 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0391d8f9-2f67-416d-9a1a-849fcf7cb500" path="/var/lib/kubelet/pods/0391d8f9-2f67-416d-9a1a-849fcf7cb500/volumes" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.533716 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-config-data\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.533769 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446db982-05e3-4131-aaf7-07e42b726565-run-httpd\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.533856 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.533900 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhp6d\" (UniqueName: \"kubernetes.io/projected/446db982-05e3-4131-aaf7-07e42b726565-kube-api-access-zhp6d\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.533926 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.533944 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-scripts\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.533980 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.534001 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446db982-05e3-4131-aaf7-07e42b726565-log-httpd\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.534619 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446db982-05e3-4131-aaf7-07e42b726565-log-httpd\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.534850 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/446db982-05e3-4131-aaf7-07e42b726565-run-httpd\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.538469 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.539164 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-scripts\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.539310 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.539622 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-config-data\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.542504 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/446db982-05e3-4131-aaf7-07e42b726565-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.560400 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhp6d\" (UniqueName: \"kubernetes.io/projected/446db982-05e3-4131-aaf7-07e42b726565-kube-api-access-zhp6d\") pod \"ceilometer-0\" (UID: \"446db982-05e3-4131-aaf7-07e42b726565\") " pod="openstack/ceilometer-0" Nov 28 17:27:30 crc kubenswrapper[5024]: I1128 17:27:30.640736 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:27:31 crc kubenswrapper[5024]: I1128 17:27:31.439940 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:27:31 crc kubenswrapper[5024]: W1128 17:27:31.447861 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod446db982_05e3_4131_aaf7_07e42b726565.slice/crio-b71695b6e768a015f1ddef4396c756f913da3473b397f7d7a1e7b39b283e0dfc WatchSource:0}: Error finding container b71695b6e768a015f1ddef4396c756f913da3473b397f7d7a1e7b39b283e0dfc: Status 404 returned error can't find the container with id b71695b6e768a015f1ddef4396c756f913da3473b397f7d7a1e7b39b283e0dfc Nov 28 17:27:32 crc kubenswrapper[5024]: I1128 17:27:32.239902 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446db982-05e3-4131-aaf7-07e42b726565","Type":"ContainerStarted","Data":"b71695b6e768a015f1ddef4396c756f913da3473b397f7d7a1e7b39b283e0dfc"} Nov 28 17:27:32 crc kubenswrapper[5024]: I1128 17:27:32.439704 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="rabbitmq" containerID="cri-o://1c04bd302d66be42cdcba39a29ea4cd5ba7672183ac7b7d67961cbbd0d65032b" gracePeriod=604795 Nov 28 17:27:33 crc kubenswrapper[5024]: I1128 17:27:33.274874 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="rabbitmq" containerID="cri-o://9021ac8633b92acd690b0c8d7fd0ed0c5282b11539876fd1592284fbf1565145" gracePeriod=604796 Nov 28 17:27:33 crc kubenswrapper[5024]: I1128 17:27:33.644897 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 28 17:27:33 crc kubenswrapper[5024]: I1128 17:27:33.996477 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Nov 28 17:27:38 crc kubenswrapper[5024]: E1128 17:27:38.842544 5024 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.141:35310->38.129.56.141:40169: read tcp 38.129.56.141:35310->38.129.56.141:40169: read: connection reset by peer Nov 28 17:27:38 crc kubenswrapper[5024]: I1128 17:27:38.851908 5024 scope.go:117] "RemoveContainer" containerID="cbd840d182f848c421656b0710596616878591dc5a5c9cd3541b49ea8670a7dc" Nov 28 17:27:40 crc kubenswrapper[5024]: I1128 17:27:40.370386 5024 generic.go:334] "Generic (PLEG): container finished" podID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerID="1c04bd302d66be42cdcba39a29ea4cd5ba7672183ac7b7d67961cbbd0d65032b" exitCode=0 Nov 28 17:27:40 crc kubenswrapper[5024]: I1128 17:27:40.370473 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a996fd8-35ac-41d9-a490-71dc31fa0686","Type":"ContainerDied","Data":"1c04bd302d66be42cdcba39a29ea4cd5ba7672183ac7b7d67961cbbd0d65032b"} Nov 28 17:27:40 crc kubenswrapper[5024]: I1128 17:27:40.372939 5024 generic.go:334] "Generic (PLEG): container finished" podID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" 
containerID="9021ac8633b92acd690b0c8d7fd0ed0c5282b11539876fd1592284fbf1565145" exitCode=0 Nov 28 17:27:40 crc kubenswrapper[5024]: I1128 17:27:40.372975 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77c4107c-2b4b-46f2-bf47-ccf384504fb1","Type":"ContainerDied","Data":"9021ac8633b92acd690b0c8d7fd0ed0c5282b11539876fd1592284fbf1565145"} Nov 28 17:27:41 crc kubenswrapper[5024]: I1128 17:27:41.764263 5024 scope.go:117] "RemoveContainer" containerID="9c802141beeee8c7aa00167fad6f387352bfda3be061fedca99b2b6ae02f1322" Nov 28 17:27:43 crc kubenswrapper[5024]: I1128 17:27:43.498607 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:27:43 crc kubenswrapper[5024]: E1128 17:27:43.499459 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.447222 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-tjcpt"] Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.461076 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.464260 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.568201 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.582113 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-tjcpt"] Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.582416 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77c4107c-2b4b-46f2-bf47-ccf384504fb1","Type":"ContainerDied","Data":"cefbf33eb3799f04361bb7c6cc2517ff004a2fb67263fb303c6defc7d329ab7c"} Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.582533 5024 scope.go:117] "RemoveContainer" containerID="9021ac8633b92acd690b0c8d7fd0ed0c5282b11539876fd1592284fbf1565145" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.587255 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.587334 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76zk\" (UniqueName: \"kubernetes.io/projected/55923d04-26e1-4f09-a64b-45c188ca346a-kube-api-access-t76zk\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.587390 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.587418 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.587442 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.587466 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-config\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.587605 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.692575 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77c4107c-2b4b-46f2-bf47-ccf384504fb1-erlang-cookie-secret\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.692930 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-config-data\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693044 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693066 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-plugins-conf\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693165 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-erlang-cookie\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693231 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krsc7\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-kube-api-access-krsc7\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693291 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77c4107c-2b4b-46f2-bf47-ccf384504fb1-pod-info\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693309 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-tls\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693334 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-plugins\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693350 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-confd\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: 
\"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.693402 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-server-conf\") pod \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\" (UID: \"77c4107c-2b4b-46f2-bf47-ccf384504fb1\") " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.696517 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-config\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.696845 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.696862 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.697170 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.697280 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76zk\" (UniqueName: \"kubernetes.io/projected/55923d04-26e1-4f09-a64b-45c188ca346a-kube-api-access-t76zk\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.697377 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.697427 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.697452 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.697520 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.698289 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.698492 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.698577 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.706193 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.707906 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-config\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.708626 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.709134 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.709201 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.710842 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.711767 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/77c4107c-2b4b-46f2-bf47-ccf384504fb1-pod-info" (OuterVolumeSpecName: "pod-info") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.712172 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-kube-api-access-krsc7" (OuterVolumeSpecName: "kube-api-access-krsc7") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "kube-api-access-krsc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.712432 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.718173 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c4107c-2b4b-46f2-bf47-ccf384504fb1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.764127 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76zk\" (UniqueName: \"kubernetes.io/projected/55923d04-26e1-4f09-a64b-45c188ca346a-kube-api-access-t76zk\") pod \"dnsmasq-dns-7d84b4d45c-tjcpt\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.790110 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-config-data" (OuterVolumeSpecName: "config-data") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.802289 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krsc7\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-kube-api-access-krsc7\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.802621 5024 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77c4107c-2b4b-46f2-bf47-ccf384504fb1-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.802700 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.802774 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.802835 5024 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77c4107c-2b4b-46f2-bf47-ccf384504fb1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.802899 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.802979 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.803061 5024 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.849117 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.862518 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-server-conf" (OuterVolumeSpecName: "server-conf") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.894834 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.909158 5024 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77c4107c-2b4b-46f2-bf47-ccf384504fb1-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.909189 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:46 crc kubenswrapper[5024]: I1128 17:27:46.947507 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "77c4107c-2b4b-46f2-bf47-ccf384504fb1" (UID: "77c4107c-2b4b-46f2-bf47-ccf384504fb1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.012161 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77c4107c-2b4b-46f2-bf47-ccf384504fb1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.250356 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.338195 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.338264 5024 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.338439 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hlb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-hbbrk_openstack(3a7fb5de-075a-4c27-a648-e6762bd7c941): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.342091 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-hbbrk" podUID="3a7fb5de-075a-4c27-a648-e6762bd7c941" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.443504 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a996fd8-35ac-41d9-a490-71dc31fa0686-pod-info\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.443643 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-erlang-cookie\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.443732 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-confd\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.443751 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
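The "Unhandled Error" entry above dumps the failing container spec as a raw Go struct. Reconstructed from that dump into source form, keeping only the populated fields (the ptr helper is added here for the pointer fields; nothing else is invented), the heat-db-sync container is approximately:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

// Reconstructed from the &Container{...} dump in the log; fields that
// were empty or nil in the dump are omitted.
var heatDBSync = corev1.Container{
	Name:    "heat-db-sync",
	Image:   "quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested",
	Command: []string{"/bin/bash"},
	Args:    []string{"-c", "/usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync"},
	Env: []corev1.EnvVar{
		{Name: "KOLLA_BOOTSTRAP", Value: "true"},
		{Name: "KOLLA_CONFIG_STRATEGY", Value: "COPY_ALWAYS"},
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "config-data", ReadOnly: true, MountPath: "/etc/heat/heat.conf.d/00-default.conf", SubPath: "00-default.conf"},
		{Name: "config-data", MountPath: "/etc/heat/heat.conf.d/01-custom.conf", SubPath: "01-custom.conf"},
		{Name: "config-data", ReadOnly: true, MountPath: "/etc/my.cnf", SubPath: "my.cnf"},
		{Name: "combined-ca-bundle", ReadOnly: true, MountPath: "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", SubPath: "tls-ca-bundle.pem"},
		{Name: "kube-api-access-6hlb8", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
	},
	ImagePullPolicy: corev1.PullIfNotPresent,
	SecurityContext: &corev1.SecurityContext{
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL", "MKNOD"}},
		RunAsUser:                ptr(int64(42418)),
		RunAsGroup:               ptr(int64(42418)),
		RunAsNonRoot:             ptr(true),
		AllowPrivilegeEscalation: ptr(false),
	},
	TerminationMessagePath:   "/dev/termination-log",
	TerminationMessagePolicy: corev1.TerminationMessageReadFile,
}

func main() { fmt.Println(heatDBSync.Name, heatDBSync.Image) }

The pull itself failed with "rpc error: code = Canceled desc = copying config: context canceled", i.e. the CRI pull was cancelled mid-copy, and the pod then enters ImagePullBackOff below.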
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-server-conf\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.443836 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-config-data\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.443964 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-tls\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.444059 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvsx4\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-kube-api-access-zvsx4\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.444125 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-plugins-conf\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.444160 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a996fd8-35ac-41d9-a490-71dc31fa0686-erlang-cookie-secret\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.444200 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-plugins\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.444217 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"8a996fd8-35ac-41d9-a490-71dc31fa0686\" (UID: \"8a996fd8-35ac-41d9-a490-71dc31fa0686\") " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.449521 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.452108 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.457631 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8a996fd8-35ac-41d9-a490-71dc31fa0686-pod-info" (OuterVolumeSpecName: "pod-info") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.464668 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.473556 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a996fd8-35ac-41d9-a490-71dc31fa0686-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.479686 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-kube-api-access-zvsx4" (OuterVolumeSpecName: "kube-api-access-zvsx4") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "kube-api-access-zvsx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.512265 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.512716 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557285 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557331 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvsx4\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-kube-api-access-zvsx4\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557348 5024 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557359 5024 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a996fd8-35ac-41d9-a490-71dc31fa0686-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557370 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557405 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557416 5024 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a996fd8-35ac-41d9-a490-71dc31fa0686-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.557429 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.573572 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-config-data" (OuterVolumeSpecName: "config-data") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.661130 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a996fd8-35ac-41d9-a490-71dc31fa0686","Type":"ContainerDied","Data":"441a536cbc861803f5928c6671a3a0177140c907f0f10a4da7b17925a0dea82f"} Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.661174 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.661221 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.684368 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.699537 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-hbbrk" podUID="3a7fb5de-075a-4c27-a648-e6762bd7c941" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.761875 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.788184 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.801047 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-server-conf" (OuterVolumeSpecName: "server-conf") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.803504 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.837196 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.964287 5024 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a996fd8-35ac-41d9-a490-71dc31fa0686-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.987867 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.988590 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="rabbitmq" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.988616 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="rabbitmq" Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.988629 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="setup-container" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.988637 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="setup-container" Nov 28 17:27:47 crc kubenswrapper[5024]: E1128 17:27:47.988677 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="setup-container" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.988687 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="setup-container" Nov 28 17:27:47 
crc kubenswrapper[5024]: E1128 17:27:47.988712 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="rabbitmq" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.988720 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="rabbitmq" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.989113 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="rabbitmq" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.989154 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="rabbitmq" Nov 28 17:27:47 crc kubenswrapper[5024]: I1128 17:27:47.991297 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.000567 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.000927 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.011342 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.014491 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.014637 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8a996fd8-35ac-41d9-a490-71dc31fa0686" (UID: "8a996fd8-35ac-41d9-a490-71dc31fa0686"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.014848 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tvj4k" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.014889 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.015236 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.015453 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.065726 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.065808 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.065889 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.065919 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.065962 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.066004 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.066056 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 
crc kubenswrapper[5024]: I1128 17:27:48.066120 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.066172 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.066205 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpd9\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-kube-api-access-pmpd9\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.066238 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.066361 5024 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a996fd8-35ac-41d9-a490-71dc31fa0686-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.168405 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpd9\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-kube-api-access-pmpd9\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.168452 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.168534 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.168570 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.168619 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.168693 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.169006 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.169155 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.169185 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.169228 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.169264 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.169750 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.169833 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.172552 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.173850 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.174248 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.174920 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.176757 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.177828 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.178213 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.184519 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.187107 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpd9\" (UniqueName: \"kubernetes.io/projected/0fae95bc-19b8-4274-ab02-cc6ebf195fe7-kube-api-access-pmpd9\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.228576 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fae95bc-19b8-4274-ab02-cc6ebf195fe7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.347817 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 
17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.349913 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.388816 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.420967 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.423337 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.427565 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.427693 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rl4vn" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.427744 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.427762 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.427834 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.430321 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.431754 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.446433 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.518191 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" path="/var/lib/kubelet/pods/77c4107c-2b4b-46f2-bf47-ccf384504fb1/volumes" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.521172 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" path="/var/lib/kubelet/pods/8a996fd8-35ac-41d9-a490-71dc31fa0686/volumes" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.584838 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.584913 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a9271f-4842-4922-a19f-11de21871c68-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.585083 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.585354 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8sv\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-kube-api-access-ff8sv\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.585753 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.585901 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-server-conf\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.585944 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a9271f-4842-4922-a19f-11de21871c68-pod-info\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.586321 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.586409 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.586445 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-config-data\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.586536 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.644666 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a996fd8-35ac-41d9-a490-71dc31fa0686" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689333 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a9271f-4842-4922-a19f-11de21871c68-pod-info\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689454 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689499 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689524 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-config-data\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689561 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689601 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689643 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a9271f-4842-4922-a19f-11de21871c68-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689690 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689734 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff8sv\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-kube-api-access-ff8sv\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.689728 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") device mount path \"/mnt/openstack/pv07\"" 
pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.690104 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.690353 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.690436 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-server-conf\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.691872 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-server-conf\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.692632 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.692917 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.695759 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81a9271f-4842-4922-a19f-11de21871c68-pod-info\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.696709 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.708489 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81a9271f-4842-4922-a19f-11de21871c68-config-data\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.715653 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.717746 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81a9271f-4842-4922-a19f-11de21871c68-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.734118 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff8sv\" (UniqueName: \"kubernetes.io/projected/81a9271f-4842-4922-a19f-11de21871c68-kube-api-access-ff8sv\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.766081 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"81a9271f-4842-4922-a19f-11de21871c68\") " pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.783659 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:27:48 crc kubenswrapper[5024]: I1128 17:27:48.996719 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="77c4107c-2b4b-46f2-bf47-ccf384504fb1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.067446 5024 scope.go:117] "RemoveContainer" containerID="c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.083593 5024 scope.go:117] "RemoveContainer" containerID="b41560ff1c9095e5c76c904102f2614192b2323b7c5a0a7e0ea7b0b8808bed08" Nov 28 17:27:52 crc kubenswrapper[5024]: E1128 17:27:52.454981 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 28 17:27:52 crc kubenswrapper[5024]: E1128 17:27:52.455066 5024 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 28 17:27:52 crc kubenswrapper[5024]: E1128 17:27:52.455271 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56h5c7h59fhcbh66ch59h55ch89h57h9hf5h644h588h5f4h55fh695h567h588h74h684h546hc5h98h66dh65fhdh689h649h666hbh547h66fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhp6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(446db982-05e3-4131-aaf7-07e42b726565): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.477914 5024 scope.go:117] "RemoveContainer" containerID="1c04bd302d66be42cdcba39a29ea4cd5ba7672183ac7b7d67961cbbd0d65032b" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.549703 5024 scope.go:117] "RemoveContainer" containerID="c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4" Nov 28 17:27:52 crc kubenswrapper[5024]: E1128 17:27:52.552999 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4\": container with ID starting with c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4 not found: ID does not exist" containerID="c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4" Nov 28 17:27:52 crc kubenswrapper[5024]: E1128 17:27:52.556915 5024 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4\": rpc error: code = NotFound desc = could not find container \"c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4\": 
container with ID starting with c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4 not found: ID does not exist" containerID="c3b5a1aa90443da628b90d142e2f8a9bccbde23e09a695bbc71f26b48cf035f4" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.556993 5024 scope.go:117] "RemoveContainer" containerID="ffcd751d53cca8b8d9f971963f6fa36719c4c67e8f0760d606fb4add08d13c45" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.628705 5024 scope.go:117] "RemoveContainer" containerID="2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.697477 5024 scope.go:117] "RemoveContainer" containerID="2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11" Nov 28 17:27:52 crc kubenswrapper[5024]: E1128 17:27:52.723447 5024 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-server-0_openstack_8a996fd8-35ac-41d9-a490-71dc31fa0686_0 in pod sandbox 441a536cbc861803f5928c6671a3a0177140c907f0f10a4da7b17925a0dea82f from index: no such id: '2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11'" containerID="2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11" Nov 28 17:27:52 crc kubenswrapper[5024]: I1128 17:27:52.723514 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11"} err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-server-0_openstack_8a996fd8-35ac-41d9-a490-71dc31fa0686_0 in pod sandbox 441a536cbc861803f5928c6671a3a0177140c907f0f10a4da7b17925a0dea82f from index: no such id: '2f6b28b4e0fe7ad569560c585bb13a5380c148687f58ad9278aaa037f4e7db11'" Nov 28 17:27:53 crc kubenswrapper[5024]: I1128 17:27:53.152367 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-tjcpt"] Nov 28 17:27:53 crc kubenswrapper[5024]: I1128 17:27:53.166385 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:27:53 crc kubenswrapper[5024]: W1128 17:27:53.178298 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81a9271f_4842_4922_a19f_11de21871c68.slice/crio-02e81c03e326066e1ff1e0a6ae749726e628e067fd5850a71847caced48223dd WatchSource:0}: Error finding container 02e81c03e326066e1ff1e0a6ae749726e628e067fd5850a71847caced48223dd: Status 404 returned error can't find the container with id 02e81c03e326066e1ff1e0a6ae749726e628e067fd5850a71847caced48223dd Nov 28 17:27:53 crc kubenswrapper[5024]: I1128 17:27:53.291264 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:27:53 crc kubenswrapper[5024]: W1128 17:27:53.295278 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fae95bc_19b8_4274_ab02_cc6ebf195fe7.slice/crio-1fbc0a2483416f0929c09f29101f98cf57120ca14da210cdcb2966702ee27813 WatchSource:0}: Error finding container 1fbc0a2483416f0929c09f29101f98cf57120ca14da210cdcb2966702ee27813: Status 404 returned error can't find the container with id 1fbc0a2483416f0929c09f29101f98cf57120ca14da210cdcb2966702ee27813 Nov 28 17:27:53 crc kubenswrapper[5024]: I1128 17:27:53.746909 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"81a9271f-4842-4922-a19f-11de21871c68","Type":"ContainerStarted","Data":"02e81c03e326066e1ff1e0a6ae749726e628e067fd5850a71847caced48223dd"} Nov 28 17:27:53 crc kubenswrapper[5024]: I1128 17:27:53.748619 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" event={"ID":"55923d04-26e1-4f09-a64b-45c188ca346a","Type":"ContainerStarted","Data":"8e53e6aedd2fe1337b6973911d5bfa26f3ec95694af2c11371a69979bcc8cbdb"} Nov 28 17:27:53 crc kubenswrapper[5024]: I1128 17:27:53.750349 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446db982-05e3-4131-aaf7-07e42b726565","Type":"ContainerStarted","Data":"710ba62309412b42f353096b2fc565306ca3f4a9716e42289e8c80a14ccd1f1b"} Nov 28 17:27:53 crc kubenswrapper[5024]: I1128 17:27:53.751801 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fae95bc-19b8-4274-ab02-cc6ebf195fe7","Type":"ContainerStarted","Data":"1fbc0a2483416f0929c09f29101f98cf57120ca14da210cdcb2966702ee27813"} Nov 28 17:27:54 crc kubenswrapper[5024]: I1128 17:27:54.771186 5024 generic.go:334] "Generic (PLEG): container finished" podID="55923d04-26e1-4f09-a64b-45c188ca346a" containerID="1e45b2c0aa399e77eb6353b4bcc4a3dbdcb25c9796b2a2aaff7596926729a233" exitCode=0 Nov 28 17:27:54 crc kubenswrapper[5024]: I1128 17:27:54.771423 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" event={"ID":"55923d04-26e1-4f09-a64b-45c188ca346a","Type":"ContainerDied","Data":"1e45b2c0aa399e77eb6353b4bcc4a3dbdcb25c9796b2a2aaff7596926729a233"} Nov 28 17:27:54 crc kubenswrapper[5024]: I1128 17:27:54.775557 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446db982-05e3-4131-aaf7-07e42b726565","Type":"ContainerStarted","Data":"bf64ae978ac58397d4d501d4e40fabad0f74ec40bd99848e20203b14a24eb7a8"} Nov 28 17:27:55 crc kubenswrapper[5024]: I1128 17:27:55.789286 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81a9271f-4842-4922-a19f-11de21871c68","Type":"ContainerStarted","Data":"df5fda9c277f040a69d49b71b690926cf7faca65e175dcb2595cebd7f649c4e6"} Nov 28 17:27:55 crc kubenswrapper[5024]: I1128 17:27:55.794331 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" event={"ID":"55923d04-26e1-4f09-a64b-45c188ca346a","Type":"ContainerStarted","Data":"65d579d5dc0f8ae31c0b48a29aab242d1a8424cd0a57365ca0022ecfed475750"} Nov 28 17:27:57 crc kubenswrapper[5024]: I1128 17:27:57.816378 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fae95bc-19b8-4274-ab02-cc6ebf195fe7","Type":"ContainerStarted","Data":"8f311492a5f1dff7df1125ba759ce9d5f82275f573e08ec2d613f32974f38bf3"} Nov 28 17:27:57 crc kubenswrapper[5024]: I1128 17:27:57.817991 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:27:57 crc kubenswrapper[5024]: E1128 17:27:57.833650 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="446db982-05e3-4131-aaf7-07e42b726565" Nov 28 17:27:57 crc kubenswrapper[5024]: I1128 17:27:57.869231 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" podStartSLOduration=11.869205235999999 podStartE2EDuration="11.869205236s" podCreationTimestamp="2025-11-28 17:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:27:57.862729578 +0000 UTC m=+1779.911650483" watchObservedRunningTime="2025-11-28 17:27:57.869205236 +0000 UTC m=+1779.918126171" Nov 28 17:27:58 crc kubenswrapper[5024]: I1128 17:27:58.507305 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:27:58 crc kubenswrapper[5024]: E1128 17:27:58.507623 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:27:58 crc kubenswrapper[5024]: I1128 17:27:58.866811 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446db982-05e3-4131-aaf7-07e42b726565","Type":"ContainerStarted","Data":"424eb0494c2a1c93196ad44455bcc690b4c39cd72d5df20c32b2292ea5d5cea5"} Nov 28 17:27:58 crc kubenswrapper[5024]: E1128 17:27:58.867977 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="446db982-05e3-4131-aaf7-07e42b726565" Nov 28 17:27:58 crc kubenswrapper[5024]: I1128 17:27:58.871734 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:27:59 crc kubenswrapper[5024]: E1128 17:27:59.876446 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="446db982-05e3-4131-aaf7-07e42b726565" Nov 28 17:28:00 crc kubenswrapper[5024]: E1128 17:28:00.887618 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="446db982-05e3-4131-aaf7-07e42b726565" Nov 28 17:28:01 crc kubenswrapper[5024]: I1128 17:28:01.897228 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:28:01 crc kubenswrapper[5024]: I1128 17:28:01.961513 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8p964"] Nov 28 17:28:01 crc kubenswrapper[5024]: I1128 17:28:01.961758 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" podUID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerName="dnsmasq-dns" containerID="cri-o://1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6" gracePeriod=10 Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 
17:28:02.115420 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-gbngz"] Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.121070 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.157756 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-gbngz"] Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.281429 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.281486 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.281523 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.281578 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-config\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.281930 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.282201 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brfj\" (UniqueName: \"kubernetes.io/projected/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-kube-api-access-4brfj\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.282357 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.389296 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.389352 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-config\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.389511 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.389618 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brfj\" (UniqueName: \"kubernetes.io/projected/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-kube-api-access-4brfj\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.389657 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.389875 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.389920 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.390749 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.390770 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-config\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.391413 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.392062 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.392985 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.393278 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.412107 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brfj\" (UniqueName: \"kubernetes.io/projected/b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1-kube-api-access-4brfj\") pod \"dnsmasq-dns-6f6df4f56c-gbngz\" (UID: \"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.494366 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.662456 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.807168 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-nb\") pod \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.807268 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-sb\") pod \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.807367 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-config\") pod \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.807392 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-swift-storage-0\") pod \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.807413 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-svc\") pod \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.807442 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2nx2\" (UniqueName: \"kubernetes.io/projected/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-kube-api-access-j2nx2\") pod \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\" (UID: \"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6\") " Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.831882 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-kube-api-access-j2nx2" (OuterVolumeSpecName: "kube-api-access-j2nx2") pod "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" (UID: "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6"). InnerVolumeSpecName "kube-api-access-j2nx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.883591 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" (UID: "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.885629 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" (UID: "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.910302 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.910356 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2nx2\" (UniqueName: \"kubernetes.io/projected/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-kube-api-access-j2nx2\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.910369 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.911222 5024 generic.go:334] "Generic (PLEG): container finished" podID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerID="1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6" exitCode=0 Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.911257 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" event={"ID":"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6","Type":"ContainerDied","Data":"1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6"} Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.911309 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" event={"ID":"7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6","Type":"ContainerDied","Data":"28e35d444ed2064a81d740427f8ac4f5af7add46e2c3c6dd3531265d3b062c32"} Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.911329 5024 scope.go:117] "RemoveContainer" containerID="1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.911476 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8p964" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.917599 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" (UID: "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.919228 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-config" (OuterVolumeSpecName: "config") pod "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" (UID: "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:02 crc kubenswrapper[5024]: I1128 17:28:02.920849 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" (UID: "7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.017748 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.017789 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.017801 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.074633 5024 scope.go:117] "RemoveContainer" containerID="5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.115312 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-gbngz"] Nov 28 17:28:03 crc kubenswrapper[5024]: W1128 17:28:03.121262 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9a5c65a_9917_497c_9a75_ce5ccf0a6ed1.slice/crio-6f317ccafe722695a82990a4cc9dff934190f3a3985c2012460d1e9b216fb6eb WatchSource:0}: Error finding container 6f317ccafe722695a82990a4cc9dff934190f3a3985c2012460d1e9b216fb6eb: Status 404 returned error can't find the container with id 6f317ccafe722695a82990a4cc9dff934190f3a3985c2012460d1e9b216fb6eb Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.454254 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8p964"] Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.466850 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8p964"] Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.747467 5024 scope.go:117] "RemoveContainer" containerID="1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6" Nov 28 17:28:03 crc kubenswrapper[5024]: E1128 17:28:03.747921 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6\": container with ID starting with 1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6 not found: ID does not exist" containerID="1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.747977 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6"} err="failed to get container status \"1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6\": rpc error: code = NotFound desc = could not find container \"1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6\": container with ID starting with 1980af0c961613437f8f3e2d92132589eb9fb79454bdd40ac383c730fa0e8fe6 not found: ID does not exist" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.748006 5024 scope.go:117] "RemoveContainer" containerID="5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b" Nov 28 17:28:03 crc kubenswrapper[5024]: E1128 17:28:03.748358 5024 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b\": container with ID starting with 5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b not found: ID does not exist" containerID="5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.748403 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b"} err="failed to get container status \"5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b\": rpc error: code = NotFound desc = could not find container \"5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b\": container with ID starting with 5ce8e26427e63d7007809d45640c78adc3775dbaf98d596992330a7b86bf527b not found: ID does not exist" Nov 28 17:28:03 crc kubenswrapper[5024]: I1128 17:28:03.927902 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" event={"ID":"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1","Type":"ContainerStarted","Data":"6f317ccafe722695a82990a4cc9dff934190f3a3985c2012460d1e9b216fb6eb"} Nov 28 17:28:04 crc kubenswrapper[5024]: I1128 17:28:04.569837 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" path="/var/lib/kubelet/pods/7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6/volumes" Nov 28 17:28:04 crc kubenswrapper[5024]: I1128 17:28:04.941733 5024 generic.go:334] "Generic (PLEG): container finished" podID="b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1" containerID="a16ff9af1942e86145d283d388ea3502c2c8dba568c7f7cd9f5d696b012f3bfc" exitCode=0 Nov 28 17:28:04 crc kubenswrapper[5024]: I1128 17:28:04.941815 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" event={"ID":"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1","Type":"ContainerDied","Data":"a16ff9af1942e86145d283d388ea3502c2c8dba568c7f7cd9f5d696b012f3bfc"} Nov 28 17:28:04 crc kubenswrapper[5024]: I1128 17:28:04.946297 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hbbrk" event={"ID":"3a7fb5de-075a-4c27-a648-e6762bd7c941","Type":"ContainerStarted","Data":"19084581dd9169bc80f9009d33cbd82ae5796b397dfbc75cc95259c9f80a5a6c"} Nov 28 17:28:04 crc kubenswrapper[5024]: I1128 17:28:04.988729 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-hbbrk" podStartSLOduration=1.522285708 podStartE2EDuration="39.988672906s" podCreationTimestamp="2025-11-28 17:27:25 +0000 UTC" firstStartedPulling="2025-11-28 17:27:26.102952891 +0000 UTC m=+1748.151873796" lastFinishedPulling="2025-11-28 17:28:04.569340089 +0000 UTC m=+1786.618260994" observedRunningTime="2025-11-28 17:28:04.978884672 +0000 UTC m=+1787.027805587" watchObservedRunningTime="2025-11-28 17:28:04.988672906 +0000 UTC m=+1787.037593831" Nov 28 17:28:05 crc kubenswrapper[5024]: I1128 17:28:05.961722 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" event={"ID":"b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1","Type":"ContainerStarted","Data":"56170c4dd37be99da8d3523f45c8e62004ab076b3b964428a7b2c4aba3b9c28c"} Nov 28 17:28:05 crc kubenswrapper[5024]: I1128 17:28:05.962002 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:05 crc kubenswrapper[5024]: 
I1128 17:28:05.995677 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" podStartSLOduration=3.995650603 podStartE2EDuration="3.995650603s" podCreationTimestamp="2025-11-28 17:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:28:05.99174688 +0000 UTC m=+1788.040667775" watchObservedRunningTime="2025-11-28 17:28:05.995650603 +0000 UTC m=+1788.044571508" Nov 28 17:28:07 crc kubenswrapper[5024]: I1128 17:28:07.984767 5024 generic.go:334] "Generic (PLEG): container finished" podID="3a7fb5de-075a-4c27-a648-e6762bd7c941" containerID="19084581dd9169bc80f9009d33cbd82ae5796b397dfbc75cc95259c9f80a5a6c" exitCode=0 Nov 28 17:28:07 crc kubenswrapper[5024]: I1128 17:28:07.984831 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hbbrk" event={"ID":"3a7fb5de-075a-4c27-a648-e6762bd7c941","Type":"ContainerDied","Data":"19084581dd9169bc80f9009d33cbd82ae5796b397dfbc75cc95259c9f80a5a6c"} Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.435653 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hbbrk" Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.577042 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hlb8\" (UniqueName: \"kubernetes.io/projected/3a7fb5de-075a-4c27-a648-e6762bd7c941-kube-api-access-6hlb8\") pod \"3a7fb5de-075a-4c27-a648-e6762bd7c941\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.577471 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-config-data\") pod \"3a7fb5de-075a-4c27-a648-e6762bd7c941\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.577671 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-combined-ca-bundle\") pod \"3a7fb5de-075a-4c27-a648-e6762bd7c941\" (UID: \"3a7fb5de-075a-4c27-a648-e6762bd7c941\") " Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.583319 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7fb5de-075a-4c27-a648-e6762bd7c941-kube-api-access-6hlb8" (OuterVolumeSpecName: "kube-api-access-6hlb8") pod "3a7fb5de-075a-4c27-a648-e6762bd7c941" (UID: "3a7fb5de-075a-4c27-a648-e6762bd7c941"). InnerVolumeSpecName "kube-api-access-6hlb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.610984 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a7fb5de-075a-4c27-a648-e6762bd7c941" (UID: "3a7fb5de-075a-4c27-a648-e6762bd7c941"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.681033 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.681070 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hlb8\" (UniqueName: \"kubernetes.io/projected/3a7fb5de-075a-4c27-a648-e6762bd7c941-kube-api-access-6hlb8\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.700826 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-config-data" (OuterVolumeSpecName: "config-data") pod "3a7fb5de-075a-4c27-a648-e6762bd7c941" (UID: "3a7fb5de-075a-4c27-a648-e6762bd7c941"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:09 crc kubenswrapper[5024]: I1128 17:28:09.783189 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a7fb5de-075a-4c27-a648-e6762bd7c941-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.011516 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hbbrk" event={"ID":"3a7fb5de-075a-4c27-a648-e6762bd7c941","Type":"ContainerDied","Data":"d439e7d6cf3e2820cb998b0d0b0de34c348d0a6df079fc57e30b2e3ad858c6e9"} Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.011555 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hbbrk" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.011571 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d439e7d6cf3e2820cb998b0d0b0de34c348d0a6df079fc57e30b2e3ad858c6e9" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.988492 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5c7c65bb6d-4vg66"] Nov 28 17:28:10 crc kubenswrapper[5024]: E1128 17:28:10.990215 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerName="dnsmasq-dns" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.990244 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerName="dnsmasq-dns" Nov 28 17:28:10 crc kubenswrapper[5024]: E1128 17:28:10.990320 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerName="init" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.990332 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerName="init" Nov 28 17:28:10 crc kubenswrapper[5024]: E1128 17:28:10.990402 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7fb5de-075a-4c27-a648-e6762bd7c941" containerName="heat-db-sync" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.990415 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7fb5de-075a-4c27-a648-e6762bd7c941" containerName="heat-db-sync" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.991581 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f7dddfb-d93a-4b7f-b9a1-0ae52bba47d6" containerName="dnsmasq-dns" Nov 28 17:28:10 crc kubenswrapper[5024]: 
I1128 17:28:10.991649 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7fb5de-075a-4c27-a648-e6762bd7c941" containerName="heat-db-sync" Nov 28 17:28:10 crc kubenswrapper[5024]: I1128 17:28:10.993439 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.044818 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-config-data\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.045079 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-config-data-custom\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.045124 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88n2\" (UniqueName: \"kubernetes.io/projected/8ddba23c-0074-409a-b5c1-fd147c402317-kube-api-access-q88n2\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.045768 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-combined-ca-bundle\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.093999 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5c7c65bb6d-4vg66"] Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.148092 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7c7f65cbb-fvsgt"] Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.149721 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-combined-ca-bundle\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.149886 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-config-data\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.149926 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.149943 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-config-data-custom\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.149965 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q88n2\" (UniqueName: \"kubernetes.io/projected/8ddba23c-0074-409a-b5c1-fd147c402317-kube-api-access-q88n2\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.164982 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-config-data-custom\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.166867 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-config-data\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.173812 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddba23c-0074-409a-b5c1-fd147c402317-combined-ca-bundle\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.190334 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c7f65cbb-fvsgt"] Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.191785 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q88n2\" (UniqueName: \"kubernetes.io/projected/8ddba23c-0074-409a-b5c1-fd147c402317-kube-api-access-q88n2\") pod \"heat-engine-5c7c65bb6d-4vg66\" (UID: \"8ddba23c-0074-409a-b5c1-fd147c402317\") " pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.215092 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-bc8bb8756-2wm58"] Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.217162 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.245497 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-bc8bb8756-2wm58"] Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.252703 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-combined-ca-bundle\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.253043 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbzz\" (UniqueName: \"kubernetes.io/projected/4f741e4f-1722-4cea-9fdf-2f93fd734983-kube-api-access-xhbzz\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.253321 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-config-data\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.253393 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-public-tls-certs\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.253482 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-internal-tls-certs\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.253574 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-config-data-custom\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.321247 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.356955 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-internal-tls-certs\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.357062 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-config-data-custom\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.357128 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzp2r\" (UniqueName: \"kubernetes.io/projected/39ee04ed-749f-4912-ae06-7feea922da25-kube-api-access-qzp2r\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.357166 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-config-data\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.357207 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-config-data-custom\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.357240 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-combined-ca-bundle\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.357314 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-internal-tls-certs\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.357781 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-public-tls-certs\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.358059 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhbzz\" (UniqueName: \"kubernetes.io/projected/4f741e4f-1722-4cea-9fdf-2f93fd734983-kube-api-access-xhbzz\") pod 
\"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.358404 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-combined-ca-bundle\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.358499 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-config-data\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.358535 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-public-tls-certs\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.361787 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-internal-tls-certs\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.364611 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-combined-ca-bundle\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.365200 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-public-tls-certs\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.368115 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-config-data\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.373445 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f741e4f-1722-4cea-9fdf-2f93fd734983-config-data-custom\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.376792 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhbzz\" (UniqueName: \"kubernetes.io/projected/4f741e4f-1722-4cea-9fdf-2f93fd734983-kube-api-access-xhbzz\") pod \"heat-api-7c7f65cbb-fvsgt\" (UID: \"4f741e4f-1722-4cea-9fdf-2f93fd734983\") " pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 
17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.464457 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-internal-tls-certs\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.464789 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-public-tls-certs\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.466122 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-combined-ca-bundle\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.466376 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzp2r\" (UniqueName: \"kubernetes.io/projected/39ee04ed-749f-4912-ae06-7feea922da25-kube-api-access-qzp2r\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.466416 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-config-data\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.466480 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-config-data-custom\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.469677 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-public-tls-certs\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.474110 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-internal-tls-certs\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.474851 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-config-data-custom\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.475874 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-combined-ca-bundle\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.478182 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ee04ed-749f-4912-ae06-7feea922da25-config-data\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.489828 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzp2r\" (UniqueName: \"kubernetes.io/projected/39ee04ed-749f-4912-ae06-7feea922da25-kube-api-access-qzp2r\") pod \"heat-cfnapi-bc8bb8756-2wm58\" (UID: \"39ee04ed-749f-4912-ae06-7feea922da25\") " pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.498156 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:28:11 crc kubenswrapper[5024]: E1128 17:28:11.498556 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.597890 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.601937 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:11 crc kubenswrapper[5024]: I1128 17:28:11.814996 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5c7c65bb6d-4vg66"] Nov 28 17:28:12 crc kubenswrapper[5024]: I1128 17:28:12.092153 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5c7c65bb6d-4vg66" event={"ID":"8ddba23c-0074-409a-b5c1-fd147c402317","Type":"ContainerStarted","Data":"f44d53530258807720c4158b2bccb00a569cf5fdc26fb73e8cc970341858ee89"} Nov 28 17:28:12 crc kubenswrapper[5024]: I1128 17:28:12.184494 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c7f65cbb-fvsgt"] Nov 28 17:28:12 crc kubenswrapper[5024]: W1128 17:28:12.205467 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39ee04ed_749f_4912_ae06_7feea922da25.slice/crio-5177f11c94f22d59ad15dcf6e8c0fb557aa28064c4066e5fb8d6cb9a378ced94 WatchSource:0}: Error finding container 5177f11c94f22d59ad15dcf6e8c0fb557aa28064c4066e5fb8d6cb9a378ced94: Status 404 returned error can't find the container with id 5177f11c94f22d59ad15dcf6e8c0fb557aa28064c4066e5fb8d6cb9a378ced94 Nov 28 17:28:12 crc kubenswrapper[5024]: I1128 17:28:12.209339 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-bc8bb8756-2wm58"] Nov 28 17:28:12 crc kubenswrapper[5024]: I1128 17:28:12.496935 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-gbngz" Nov 28 17:28:12 crc kubenswrapper[5024]: I1128 17:28:12.659834 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-tjcpt"] Nov 28 17:28:12 crc kubenswrapper[5024]: I1128 17:28:12.660461 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" podUID="55923d04-26e1-4f09-a64b-45c188ca346a" containerName="dnsmasq-dns" containerID="cri-o://65d579d5dc0f8ae31c0b48a29aab242d1a8424cd0a57365ca0022ecfed475750" gracePeriod=10 Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.161547 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c7f65cbb-fvsgt" event={"ID":"4f741e4f-1722-4cea-9fdf-2f93fd734983","Type":"ContainerStarted","Data":"0b68b28c784684a1c4ba4ac6016d9c3a0eb7e404e759355314d04182a3b763b5"} Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.165957 5024 generic.go:334] "Generic (PLEG): container finished" podID="55923d04-26e1-4f09-a64b-45c188ca346a" containerID="65d579d5dc0f8ae31c0b48a29aab242d1a8424cd0a57365ca0022ecfed475750" exitCode=0 Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.166058 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" event={"ID":"55923d04-26e1-4f09-a64b-45c188ca346a","Type":"ContainerDied","Data":"65d579d5dc0f8ae31c0b48a29aab242d1a8424cd0a57365ca0022ecfed475750"} Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.171921 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5c7c65bb6d-4vg66" event={"ID":"8ddba23c-0074-409a-b5c1-fd147c402317","Type":"ContainerStarted","Data":"9109c3fd562f0d62e8a9ef4973a72f600416357c0ebf9087c27b037b98668866"} Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.172082 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.178406 
5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bc8bb8756-2wm58" event={"ID":"39ee04ed-749f-4912-ae06-7feea922da25","Type":"ContainerStarted","Data":"5177f11c94f22d59ad15dcf6e8c0fb557aa28064c4066e5fb8d6cb9a378ced94"} Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.190792 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5c7c65bb6d-4vg66" podStartSLOduration=3.190767808 podStartE2EDuration="3.190767808s" podCreationTimestamp="2025-11-28 17:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:28:13.190163351 +0000 UTC m=+1795.239084256" watchObservedRunningTime="2025-11-28 17:28:13.190767808 +0000 UTC m=+1795.239688713" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.307761 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.422929 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-sb\") pod \"55923d04-26e1-4f09-a64b-45c188ca346a\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.423080 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-nb\") pod \"55923d04-26e1-4f09-a64b-45c188ca346a\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.423106 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-swift-storage-0\") pod \"55923d04-26e1-4f09-a64b-45c188ca346a\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.423173 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-openstack-edpm-ipam\") pod \"55923d04-26e1-4f09-a64b-45c188ca346a\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.423296 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t76zk\" (UniqueName: \"kubernetes.io/projected/55923d04-26e1-4f09-a64b-45c188ca346a-kube-api-access-t76zk\") pod \"55923d04-26e1-4f09-a64b-45c188ca346a\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.423372 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-config\") pod \"55923d04-26e1-4f09-a64b-45c188ca346a\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.423410 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-svc\") pod \"55923d04-26e1-4f09-a64b-45c188ca346a\" (UID: \"55923d04-26e1-4f09-a64b-45c188ca346a\") " Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 
17:28:13.440085 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55923d04-26e1-4f09-a64b-45c188ca346a-kube-api-access-t76zk" (OuterVolumeSpecName: "kube-api-access-t76zk") pod "55923d04-26e1-4f09-a64b-45c188ca346a" (UID: "55923d04-26e1-4f09-a64b-45c188ca346a"). InnerVolumeSpecName "kube-api-access-t76zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.482866 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "55923d04-26e1-4f09-a64b-45c188ca346a" (UID: "55923d04-26e1-4f09-a64b-45c188ca346a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.493251 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55923d04-26e1-4f09-a64b-45c188ca346a" (UID: "55923d04-26e1-4f09-a64b-45c188ca346a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.497224 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-config" (OuterVolumeSpecName: "config") pod "55923d04-26e1-4f09-a64b-45c188ca346a" (UID: "55923d04-26e1-4f09-a64b-45c188ca346a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.498405 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55923d04-26e1-4f09-a64b-45c188ca346a" (UID: "55923d04-26e1-4f09-a64b-45c188ca346a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.501817 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "55923d04-26e1-4f09-a64b-45c188ca346a" (UID: "55923d04-26e1-4f09-a64b-45c188ca346a"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.513478 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.530454 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.530490 5024 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.530506 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.530519 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t76zk\" (UniqueName: \"kubernetes.io/projected/55923d04-26e1-4f09-a64b-45c188ca346a-kube-api-access-t76zk\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.530533 5024 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.530543 5024 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.569262 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55923d04-26e1-4f09-a64b-45c188ca346a" (UID: "55923d04-26e1-4f09-a64b-45c188ca346a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:28:13 crc kubenswrapper[5024]: I1128 17:28:13.633462 5024 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55923d04-26e1-4f09-a64b-45c188ca346a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:14 crc kubenswrapper[5024]: I1128 17:28:14.209332 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" Nov 28 17:28:14 crc kubenswrapper[5024]: I1128 17:28:14.210160 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-tjcpt" event={"ID":"55923d04-26e1-4f09-a64b-45c188ca346a","Type":"ContainerDied","Data":"8e53e6aedd2fe1337b6973911d5bfa26f3ec95694af2c11371a69979bcc8cbdb"} Nov 28 17:28:14 crc kubenswrapper[5024]: I1128 17:28:14.210211 5024 scope.go:117] "RemoveContainer" containerID="65d579d5dc0f8ae31c0b48a29aab242d1a8424cd0a57365ca0022ecfed475750" Nov 28 17:28:14 crc kubenswrapper[5024]: I1128 17:28:14.249802 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-tjcpt"] Nov 28 17:28:14 crc kubenswrapper[5024]: I1128 17:28:14.262076 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-tjcpt"] Nov 28 17:28:14 crc kubenswrapper[5024]: I1128 17:28:14.265165 5024 scope.go:117] "RemoveContainer" containerID="1e45b2c0aa399e77eb6353b4bcc4a3dbdcb25c9796b2a2aaff7596926729a233" Nov 28 17:28:14 crc kubenswrapper[5024]: I1128 17:28:14.511570 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55923d04-26e1-4f09-a64b-45c188ca346a" path="/var/lib/kubelet/pods/55923d04-26e1-4f09-a64b-45c188ca346a/volumes" Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.223425 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bc8bb8756-2wm58" event={"ID":"39ee04ed-749f-4912-ae06-7feea922da25","Type":"ContainerStarted","Data":"b32cba1d1a203bdb47363e4fc668847d3a1ffcdd696c56377d4d9fc3be7eacf2"} Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.223914 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.227440 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"446db982-05e3-4131-aaf7-07e42b726565","Type":"ContainerStarted","Data":"0eb11fe46e36c14e07bbe91108104aa90e4270c16c7ad95b78ee48307b75cef0"} Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.229584 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c7f65cbb-fvsgt" event={"ID":"4f741e4f-1722-4cea-9fdf-2f93fd734983","Type":"ContainerStarted","Data":"9093c3df508961985babafb6951e2cc173d2e865664991ff7d7640abda2661be"} Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.229770 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.247539 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-bc8bb8756-2wm58" podStartSLOduration=2.188822922 podStartE2EDuration="4.247518545s" podCreationTimestamp="2025-11-28 17:28:11 +0000 UTC" firstStartedPulling="2025-11-28 17:28:12.207835309 +0000 UTC m=+1794.256756214" lastFinishedPulling="2025-11-28 17:28:14.266530932 +0000 UTC m=+1796.315451837" observedRunningTime="2025-11-28 17:28:15.244207999 +0000 UTC m=+1797.293128904" watchObservedRunningTime="2025-11-28 17:28:15.247518545 +0000 UTC m=+1797.296439450" Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.280012 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.464449589 podStartE2EDuration="45.279987137s" podCreationTimestamp="2025-11-28 17:27:30 +0000 UTC" firstStartedPulling="2025-11-28 17:27:31.450724186 
+0000 UTC m=+1753.499645091" lastFinishedPulling="2025-11-28 17:28:14.266261724 +0000 UTC m=+1796.315182639" observedRunningTime="2025-11-28 17:28:15.274869829 +0000 UTC m=+1797.323790744" watchObservedRunningTime="2025-11-28 17:28:15.279987137 +0000 UTC m=+1797.328908042" Nov 28 17:28:15 crc kubenswrapper[5024]: I1128 17:28:15.313164 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7c7f65cbb-fvsgt" podStartSLOduration=2.223926711 podStartE2EDuration="4.313136409s" podCreationTimestamp="2025-11-28 17:28:11 +0000 UTC" firstStartedPulling="2025-11-28 17:28:12.176772768 +0000 UTC m=+1794.225693663" lastFinishedPulling="2025-11-28 17:28:14.265982456 +0000 UTC m=+1796.314903361" observedRunningTime="2025-11-28 17:28:15.295106866 +0000 UTC m=+1797.344027771" watchObservedRunningTime="2025-11-28 17:28:15.313136409 +0000 UTC m=+1797.362057324" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.895594 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt"] Nov 28 17:28:21 crc kubenswrapper[5024]: E1128 17:28:21.896683 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55923d04-26e1-4f09-a64b-45c188ca346a" containerName="init" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.896697 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="55923d04-26e1-4f09-a64b-45c188ca346a" containerName="init" Nov 28 17:28:21 crc kubenswrapper[5024]: E1128 17:28:21.896712 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55923d04-26e1-4f09-a64b-45c188ca346a" containerName="dnsmasq-dns" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.896719 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="55923d04-26e1-4f09-a64b-45c188ca346a" containerName="dnsmasq-dns" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.896952 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="55923d04-26e1-4f09-a64b-45c188ca346a" containerName="dnsmasq-dns" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.897796 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.902195 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.902340 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.903626 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.903799 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:28:21 crc kubenswrapper[5024]: I1128 17:28:21.923688 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt"] Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.043193 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.043393 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.043461 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.043508 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75p8\" (UniqueName: \"kubernetes.io/projected/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-kube-api-access-q75p8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.145514 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.145618 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.145652 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q75p8\" (UniqueName: \"kubernetes.io/projected/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-kube-api-access-q75p8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.145823 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.152466 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.152619 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.160088 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.170503 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q75p8\" (UniqueName: \"kubernetes.io/projected/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-kube-api-access-q75p8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.229892 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:22 crc kubenswrapper[5024]: I1128 17:28:22.974695 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt"] Nov 28 17:28:22 crc kubenswrapper[5024]: W1128 17:28:22.981696 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6f96dc0_0ac5_4a4a_a888_870195dca5d0.slice/crio-1495eafe33d330f66a71be1b2149db1bbf36e390955824663e797501b9b3796d WatchSource:0}: Error finding container 1495eafe33d330f66a71be1b2149db1bbf36e390955824663e797501b9b3796d: Status 404 returned error can't find the container with id 1495eafe33d330f66a71be1b2149db1bbf36e390955824663e797501b9b3796d Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.225469 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-bc8bb8756-2wm58" Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.296631 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7c7f65cbb-fvsgt" Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.298416 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6cdbcf9767-2dvsc"] Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.298674 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" podUID="d4319898-7040-4c0c-b5eb-d2eabe093afb" containerName="heat-cfnapi" containerID="cri-o://38d5544cc8a3c600f7dcfd6be11af667c4faf9d1b79a85233c540966e7b0819a" gracePeriod=60 Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.381600 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" event={"ID":"a6f96dc0-0ac5-4a4a-a888-870195dca5d0","Type":"ContainerStarted","Data":"1495eafe33d330f66a71be1b2149db1bbf36e390955824663e797501b9b3796d"} Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.397438 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cb795f57c-7826b"] Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.397649 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6cb795f57c-7826b" podUID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" containerName="heat-api" containerID="cri-o://7abc8dbbc9f6e002ea8839c28dbfa5350914a54bd42034d95b2eb3a409f662dd" gracePeriod=60 Nov 28 17:28:23 crc kubenswrapper[5024]: I1128 17:28:23.498347 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:28:23 crc kubenswrapper[5024]: E1128 17:28:23.498675 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:28:26 crc kubenswrapper[5024]: I1128 17:28:26.504567 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" podUID="d4319898-7040-4c0c-b5eb-d2eabe093afb" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.221:8000/healthcheck\": read tcp 
10.217.0.2:59228->10.217.0.221:8000: read: connection reset by peer" Nov 28 17:28:26 crc kubenswrapper[5024]: I1128 17:28:26.563103 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6cb795f57c-7826b" podUID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.220:8004/healthcheck\": read tcp 10.217.0.2:40404->10.217.0.220:8004: read: connection reset by peer" Nov 28 17:28:27 crc kubenswrapper[5024]: I1128 17:28:27.473125 5024 generic.go:334] "Generic (PLEG): container finished" podID="d4319898-7040-4c0c-b5eb-d2eabe093afb" containerID="38d5544cc8a3c600f7dcfd6be11af667c4faf9d1b79a85233c540966e7b0819a" exitCode=0 Nov 28 17:28:27 crc kubenswrapper[5024]: I1128 17:28:27.473205 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" event={"ID":"d4319898-7040-4c0c-b5eb-d2eabe093afb","Type":"ContainerDied","Data":"38d5544cc8a3c600f7dcfd6be11af667c4faf9d1b79a85233c540966e7b0819a"} Nov 28 17:28:27 crc kubenswrapper[5024]: I1128 17:28:27.476822 5024 generic.go:334] "Generic (PLEG): container finished" podID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" containerID="7abc8dbbc9f6e002ea8839c28dbfa5350914a54bd42034d95b2eb3a409f662dd" exitCode=0 Nov 28 17:28:27 crc kubenswrapper[5024]: I1128 17:28:27.476851 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cb795f57c-7826b" event={"ID":"eda77528-b4c2-4529-9293-f5bf3c7aeb5a","Type":"ContainerDied","Data":"7abc8dbbc9f6e002ea8839c28dbfa5350914a54bd42034d95b2eb3a409f662dd"} Nov 28 17:28:28 crc kubenswrapper[5024]: I1128 17:28:28.490358 5024 generic.go:334] "Generic (PLEG): container finished" podID="81a9271f-4842-4922-a19f-11de21871c68" containerID="df5fda9c277f040a69d49b71b690926cf7faca65e175dcb2595cebd7f649c4e6" exitCode=0 Nov 28 17:28:28 crc kubenswrapper[5024]: I1128 17:28:28.490441 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81a9271f-4842-4922-a19f-11de21871c68","Type":"ContainerDied","Data":"df5fda9c277f040a69d49b71b690926cf7faca65e175dcb2595cebd7f649c4e6"} Nov 28 17:28:29 crc kubenswrapper[5024]: I1128 17:28:29.504357 5024 generic.go:334] "Generic (PLEG): container finished" podID="0fae95bc-19b8-4274-ab02-cc6ebf195fe7" containerID="8f311492a5f1dff7df1125ba759ce9d5f82275f573e08ec2d613f32974f38bf3" exitCode=0 Nov 28 17:28:29 crc kubenswrapper[5024]: I1128 17:28:29.504431 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fae95bc-19b8-4274-ab02-cc6ebf195fe7","Type":"ContainerDied","Data":"8f311492a5f1dff7df1125ba759ce9d5f82275f573e08ec2d613f32974f38bf3"} Nov 28 17:28:29 crc kubenswrapper[5024]: I1128 17:28:29.595262 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6cb795f57c-7826b" podUID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.220:8004/healthcheck\": dial tcp 10.217.0.220:8004: connect: connection refused" Nov 28 17:28:29 crc kubenswrapper[5024]: I1128 17:28:29.736188 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" podUID="d4319898-7040-4c0c-b5eb-d2eabe093afb" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.221:8000/healthcheck\": dial tcp 10.217.0.221:8000: connect: connection refused" Nov 28 17:28:31 crc kubenswrapper[5024]: I1128 17:28:31.363673 5024 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-engine-5c7c65bb6d-4vg66" Nov 28 17:28:31 crc kubenswrapper[5024]: I1128 17:28:31.412452 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-586d869b9-5wnvb"] Nov 28 17:28:31 crc kubenswrapper[5024]: I1128 17:28:31.412685 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-586d869b9-5wnvb" podUID="58bfac75-cfac-4404-b44b-1ca7b1a94442" containerName="heat-engine" containerID="cri-o://15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" gracePeriod=60 Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.259463 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.372008 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data\") pod \"d4319898-7040-4c0c-b5eb-d2eabe093afb\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.372320 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data-custom\") pod \"d4319898-7040-4c0c-b5eb-d2eabe093afb\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.372439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-combined-ca-bundle\") pod \"d4319898-7040-4c0c-b5eb-d2eabe093afb\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.372526 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-internal-tls-certs\") pod \"d4319898-7040-4c0c-b5eb-d2eabe093afb\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.372676 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-public-tls-certs\") pod \"d4319898-7040-4c0c-b5eb-d2eabe093afb\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.372773 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c2s8\" (UniqueName: \"kubernetes.io/projected/d4319898-7040-4c0c-b5eb-d2eabe093afb-kube-api-access-2c2s8\") pod \"d4319898-7040-4c0c-b5eb-d2eabe093afb\" (UID: \"d4319898-7040-4c0c-b5eb-d2eabe093afb\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.392612 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cb795f57c-7826b" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.437442 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d4319898-7040-4c0c-b5eb-d2eabe093afb" (UID: "d4319898-7040-4c0c-b5eb-d2eabe093afb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.439540 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4319898-7040-4c0c-b5eb-d2eabe093afb-kube-api-access-2c2s8" (OuterVolumeSpecName: "kube-api-access-2c2s8") pod "d4319898-7040-4c0c-b5eb-d2eabe093afb" (UID: "d4319898-7040-4c0c-b5eb-d2eabe093afb"). InnerVolumeSpecName "kube-api-access-2c2s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.474978 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data\") pod \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.475078 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-combined-ca-bundle\") pod \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.475221 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gffkv\" (UniqueName: \"kubernetes.io/projected/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-kube-api-access-gffkv\") pod \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.475299 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-internal-tls-certs\") pod \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.475349 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-public-tls-certs\") pod \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.475468 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data-custom\") pod \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\" (UID: \"eda77528-b4c2-4529-9293-f5bf3c7aeb5a\") " Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.476061 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.476076 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c2s8\" (UniqueName: \"kubernetes.io/projected/d4319898-7040-4c0c-b5eb-d2eabe093afb-kube-api-access-2c2s8\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.481315 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-kube-api-access-gffkv" (OuterVolumeSpecName: "kube-api-access-gffkv") pod "eda77528-b4c2-4529-9293-f5bf3c7aeb5a" (UID: 
"eda77528-b4c2-4529-9293-f5bf3c7aeb5a"). InnerVolumeSpecName "kube-api-access-gffkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.482119 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eda77528-b4c2-4529-9293-f5bf3c7aeb5a" (UID: "eda77528-b4c2-4529-9293-f5bf3c7aeb5a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.505115 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:28:34 crc kubenswrapper[5024]: E1128 17:28:34.506737 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.514201 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4319898-7040-4c0c-b5eb-d2eabe093afb" (UID: "d4319898-7040-4c0c-b5eb-d2eabe093afb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.539680 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4319898-7040-4c0c-b5eb-d2eabe093afb" (UID: "d4319898-7040-4c0c-b5eb-d2eabe093afb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.552183 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4319898-7040-4c0c-b5eb-d2eabe093afb" (UID: "d4319898-7040-4c0c-b5eb-d2eabe093afb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.558553 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data" (OuterVolumeSpecName: "config-data") pod "d4319898-7040-4c0c-b5eb-d2eabe093afb" (UID: "d4319898-7040-4c0c-b5eb-d2eabe093afb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.575301 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eda77528-b4c2-4529-9293-f5bf3c7aeb5a" (UID: "eda77528-b4c2-4529-9293-f5bf3c7aeb5a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.578585 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.578621 5024 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.578632 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gffkv\" (UniqueName: \"kubernetes.io/projected/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-kube-api-access-gffkv\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.578644 5024 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.578659 5024 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.578673 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4319898-7040-4c0c-b5eb-d2eabe093afb-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.578684 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.581141 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data" (OuterVolumeSpecName: "config-data") pod "eda77528-b4c2-4529-9293-f5bf3c7aeb5a" (UID: "eda77528-b4c2-4529-9293-f5bf3c7aeb5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.584092 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eda77528-b4c2-4529-9293-f5bf3c7aeb5a" (UID: "eda77528-b4c2-4529-9293-f5bf3c7aeb5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.766012 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.766064 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.791874 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fae95bc-19b8-4274-ab02-cc6ebf195fe7","Type":"ContainerStarted","Data":"d8504e9fcad1ee0212fd2957fb41888ebe23902ff4660edd148e4fe4a0a97d8c"} Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.793372 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.798470 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eda77528-b4c2-4529-9293-f5bf3c7aeb5a" (UID: "eda77528-b4c2-4529-9293-f5bf3c7aeb5a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.817267 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6cb795f57c-7826b" event={"ID":"eda77528-b4c2-4529-9293-f5bf3c7aeb5a","Type":"ContainerDied","Data":"26f9705a1ee0060f49f467e7edb9f5c17ef28420b2c1cbf0f20560c590f25f74"} Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.817329 5024 scope.go:117] "RemoveContainer" containerID="7abc8dbbc9f6e002ea8839c28dbfa5350914a54bd42034d95b2eb3a409f662dd" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.817368 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6cb795f57c-7826b" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.824140 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"81a9271f-4842-4922-a19f-11de21871c68","Type":"ContainerStarted","Data":"7b77bd054d74f603cccf45af4416299e142e022082030bf1d633569dc51f1624"} Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.824977 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.826759 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" event={"ID":"d4319898-7040-4c0c-b5eb-d2eabe093afb","Type":"ContainerDied","Data":"4a751efb5ce220d58d1727b0c975505fa27ee6bebb9e7ea15cb59c81b28af867"} Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.826822 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6cdbcf9767-2dvsc" Nov 28 17:28:34 crc kubenswrapper[5024]: I1128 17:28:34.868739 5024 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eda77528-b4c2-4529-9293-f5bf3c7aeb5a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.076874 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.076854871 podStartE2EDuration="48.076854871s" podCreationTimestamp="2025-11-28 17:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:28:35.068725795 +0000 UTC m=+1817.117646700" watchObservedRunningTime="2025-11-28 17:28:35.076854871 +0000 UTC m=+1817.125775766" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.136743 5024 scope.go:117] "RemoveContainer" containerID="38d5544cc8a3c600f7dcfd6be11af667c4faf9d1b79a85233c540966e7b0819a" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.137059 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.137037797 podStartE2EDuration="47.137037797s" podCreationTimestamp="2025-11-28 17:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:28:35.107091708 +0000 UTC m=+1817.156012613" watchObservedRunningTime="2025-11-28 17:28:35.137037797 +0000 UTC m=+1817.185958702" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.168173 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6cdbcf9767-2dvsc"] Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.186590 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6cdbcf9767-2dvsc"] Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.196695 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6cb795f57c-7826b"] Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.206894 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6cb795f57c-7826b"] Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.359673 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-llhcr"] Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.370269 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-llhcr"] Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.536102 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-wmm7h"] Nov 28 17:28:35 crc kubenswrapper[5024]: E1128 17:28:35.536635 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" containerName="heat-api" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.536651 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" containerName="heat-api" Nov 28 17:28:35 crc kubenswrapper[5024]: E1128 17:28:35.536675 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4319898-7040-4c0c-b5eb-d2eabe093afb" containerName="heat-cfnapi" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.536681 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4319898-7040-4c0c-b5eb-d2eabe093afb" containerName="heat-cfnapi" Nov 28 17:28:35 crc 
kubenswrapper[5024]: I1128 17:28:35.536923 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" containerName="heat-api" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.536949 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4319898-7040-4c0c-b5eb-d2eabe093afb" containerName="heat-cfnapi" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.537762 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.546730 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.557745 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wmm7h"] Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.669493 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-scripts\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.669917 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-combined-ca-bundle\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.670190 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vf8\" (UniqueName: \"kubernetes.io/projected/8b59921a-033f-454b-afad-20ee4a3481e4-kube-api-access-82vf8\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.670315 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-config-data\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.773008 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-combined-ca-bundle\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.773108 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82vf8\" (UniqueName: \"kubernetes.io/projected/8b59921a-033f-454b-afad-20ee4a3481e4-kube-api-access-82vf8\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.773179 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-config-data\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc 
kubenswrapper[5024]: I1128 17:28:35.773381 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-scripts\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.779941 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-combined-ca-bundle\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.780471 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-scripts\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.782965 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-config-data\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.802759 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vf8\" (UniqueName: \"kubernetes.io/projected/8b59921a-033f-454b-afad-20ee4a3481e4-kube-api-access-82vf8\") pod \"aodh-db-sync-wmm7h\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.865191 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" event={"ID":"a6f96dc0-0ac5-4a4a-a888-870195dca5d0","Type":"ContainerStarted","Data":"3e35e28754e95d222b0e6e327a255f3f4a15734b0f83f80d140fa6fdbc1e53ab"} Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.890742 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" podStartSLOduration=3.990317362 podStartE2EDuration="14.890723895s" podCreationTimestamp="2025-11-28 17:28:21 +0000 UTC" firstStartedPulling="2025-11-28 17:28:22.984637627 +0000 UTC m=+1805.033558542" lastFinishedPulling="2025-11-28 17:28:33.88504417 +0000 UTC m=+1815.933965075" observedRunningTime="2025-11-28 17:28:35.887382599 +0000 UTC m=+1817.936303504" watchObservedRunningTime="2025-11-28 17:28:35.890723895 +0000 UTC m=+1817.939644790" Nov 28 17:28:35 crc kubenswrapper[5024]: I1128 17:28:35.905876 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:36 crc kubenswrapper[5024]: I1128 17:28:36.397600 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wmm7h"] Nov 28 17:28:36 crc kubenswrapper[5024]: I1128 17:28:36.511978 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f864af7-37e3-45ce-ba16-0a139c33831f" path="/var/lib/kubelet/pods/5f864af7-37e3-45ce-ba16-0a139c33831f/volumes" Nov 28 17:28:36 crc kubenswrapper[5024]: I1128 17:28:36.512639 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4319898-7040-4c0c-b5eb-d2eabe093afb" path="/var/lib/kubelet/pods/d4319898-7040-4c0c-b5eb-d2eabe093afb/volumes" Nov 28 17:28:36 crc kubenswrapper[5024]: I1128 17:28:36.513168 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda77528-b4c2-4529-9293-f5bf3c7aeb5a" path="/var/lib/kubelet/pods/eda77528-b4c2-4529-9293-f5bf3c7aeb5a/volumes" Nov 28 17:28:36 crc kubenswrapper[5024]: I1128 17:28:36.890657 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wmm7h" event={"ID":"8b59921a-033f-454b-afad-20ee4a3481e4","Type":"ContainerStarted","Data":"78c637b8bb105da207d684b78df113cefed00eb6fbd0ac0ef5d64a39022da402"} Nov 28 17:28:37 crc kubenswrapper[5024]: E1128 17:28:37.860130 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:28:37 crc kubenswrapper[5024]: E1128 17:28:37.862146 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:28:37 crc kubenswrapper[5024]: E1128 17:28:37.865181 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:28:37 crc kubenswrapper[5024]: E1128 17:28:37.865244 5024 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-586d869b9-5wnvb" podUID="58bfac75-cfac-4404-b44b-1ca7b1a94442" containerName="heat-engine" Nov 28 17:28:42 crc kubenswrapper[5024]: I1128 17:28:42.964291 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wmm7h" event={"ID":"8b59921a-033f-454b-afad-20ee4a3481e4","Type":"ContainerStarted","Data":"6ac7b192a69b799d0138dac2087dab1ca84f0c15b5330ee1531c1a191a6d23e2"} Nov 28 17:28:42 crc kubenswrapper[5024]: I1128 17:28:42.991750 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-wmm7h" podStartSLOduration=2.253759274 podStartE2EDuration="7.99173219s" podCreationTimestamp="2025-11-28 17:28:35 +0000 UTC" firstStartedPulling="2025-11-28 17:28:36.39304186 +0000 UTC m=+1818.441962765" lastFinishedPulling="2025-11-28 17:28:42.131014776 +0000 UTC 
m=+1824.179935681" observedRunningTime="2025-11-28 17:28:42.986614422 +0000 UTC m=+1825.035535327" watchObservedRunningTime="2025-11-28 17:28:42.99173219 +0000 UTC m=+1825.040653095" Nov 28 17:28:46 crc kubenswrapper[5024]: I1128 17:28:46.004793 5024 generic.go:334] "Generic (PLEG): container finished" podID="8b59921a-033f-454b-afad-20ee4a3481e4" containerID="6ac7b192a69b799d0138dac2087dab1ca84f0c15b5330ee1531c1a191a6d23e2" exitCode=0 Nov 28 17:28:46 crc kubenswrapper[5024]: I1128 17:28:46.004901 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wmm7h" event={"ID":"8b59921a-033f-454b-afad-20ee4a3481e4","Type":"ContainerDied","Data":"6ac7b192a69b799d0138dac2087dab1ca84f0c15b5330ee1531c1a191a6d23e2"} Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.018532 5024 generic.go:334] "Generic (PLEG): container finished" podID="a6f96dc0-0ac5-4a4a-a888-870195dca5d0" containerID="3e35e28754e95d222b0e6e327a255f3f4a15734b0f83f80d140fa6fdbc1e53ab" exitCode=0 Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.018640 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" event={"ID":"a6f96dc0-0ac5-4a4a-a888-870195dca5d0","Type":"ContainerDied","Data":"3e35e28754e95d222b0e6e327a255f3f4a15734b0f83f80d140fa6fdbc1e53ab"} Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.479661 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.498474 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:28:47 crc kubenswrapper[5024]: E1128 17:28:47.499041 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.549760 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82vf8\" (UniqueName: \"kubernetes.io/projected/8b59921a-033f-454b-afad-20ee4a3481e4-kube-api-access-82vf8\") pod \"8b59921a-033f-454b-afad-20ee4a3481e4\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.550367 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-scripts\") pod \"8b59921a-033f-454b-afad-20ee4a3481e4\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.550455 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-config-data\") pod \"8b59921a-033f-454b-afad-20ee4a3481e4\" (UID: \"8b59921a-033f-454b-afad-20ee4a3481e4\") " Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.550561 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-combined-ca-bundle\") pod \"8b59921a-033f-454b-afad-20ee4a3481e4\" (UID: 
\"8b59921a-033f-454b-afad-20ee4a3481e4\") " Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.560116 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b59921a-033f-454b-afad-20ee4a3481e4-kube-api-access-82vf8" (OuterVolumeSpecName: "kube-api-access-82vf8") pod "8b59921a-033f-454b-afad-20ee4a3481e4" (UID: "8b59921a-033f-454b-afad-20ee4a3481e4"). InnerVolumeSpecName "kube-api-access-82vf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.560272 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-scripts" (OuterVolumeSpecName: "scripts") pod "8b59921a-033f-454b-afad-20ee4a3481e4" (UID: "8b59921a-033f-454b-afad-20ee4a3481e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.585699 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b59921a-033f-454b-afad-20ee4a3481e4" (UID: "8b59921a-033f-454b-afad-20ee4a3481e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.587611 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-config-data" (OuterVolumeSpecName: "config-data") pod "8b59921a-033f-454b-afad-20ee4a3481e4" (UID: "8b59921a-033f-454b-afad-20ee4a3481e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.657615 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.657653 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.657663 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b59921a-033f-454b-afad-20ee4a3481e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:47 crc kubenswrapper[5024]: I1128 17:28:47.657674 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82vf8\" (UniqueName: \"kubernetes.io/projected/8b59921a-033f-454b-afad-20ee4a3481e4-kube-api-access-82vf8\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:47 crc kubenswrapper[5024]: E1128 17:28:47.857376 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:28:47 crc kubenswrapper[5024]: E1128 17:28:47.858878 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:28:47 crc kubenswrapper[5024]: E1128 17:28:47.860489 5024 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 17:28:47 crc kubenswrapper[5024]: E1128 17:28:47.860525 5024 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-586d869b9-5wnvb" podUID="58bfac75-cfac-4404-b44b-1ca7b1a94442" containerName="heat-engine" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.030452 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wmm7h" event={"ID":"8b59921a-033f-454b-afad-20ee4a3481e4","Type":"ContainerDied","Data":"78c637b8bb105da207d684b78df113cefed00eb6fbd0ac0ef5d64a39022da402"} Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.030504 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c637b8bb105da207d684b78df113cefed00eb6fbd0ac0ef5d64a39022da402" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.030470 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wmm7h" Nov 28 17:28:48 crc kubenswrapper[5024]: E1128 17:28:48.134195 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b59921a_033f_454b_afad_20ee4a3481e4.slice/crio-78c637b8bb105da207d684b78df113cefed00eb6fbd0ac0ef5d64a39022da402\": RecentStats: unable to find data in memory cache]" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.357487 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.649004 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.698999 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-inventory\") pod \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.699190 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-repo-setup-combined-ca-bundle\") pod \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.699263 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q75p8\" (UniqueName: \"kubernetes.io/projected/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-kube-api-access-q75p8\") pod \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.699310 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-ssh-key\") pod \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\" (UID: \"a6f96dc0-0ac5-4a4a-a888-870195dca5d0\") " Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.710586 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a6f96dc0-0ac5-4a4a-a888-870195dca5d0" (UID: "a6f96dc0-0ac5-4a4a-a888-870195dca5d0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.712699 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-kube-api-access-q75p8" (OuterVolumeSpecName: "kube-api-access-q75p8") pod "a6f96dc0-0ac5-4a4a-a888-870195dca5d0" (UID: "a6f96dc0-0ac5-4a4a-a888-870195dca5d0"). InnerVolumeSpecName "kube-api-access-q75p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.733637 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a6f96dc0-0ac5-4a4a-a888-870195dca5d0" (UID: "a6f96dc0-0ac5-4a4a-a888-870195dca5d0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.737223 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-inventory" (OuterVolumeSpecName: "inventory") pod "a6f96dc0-0ac5-4a4a-a888-870195dca5d0" (UID: "a6f96dc0-0ac5-4a4a-a888-870195dca5d0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.788055 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.802539 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.802573 5024 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.802585 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q75p8\" (UniqueName: \"kubernetes.io/projected/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-kube-api-access-q75p8\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:48 crc kubenswrapper[5024]: I1128 17:28:48.802599 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6f96dc0-0ac5-4a4a-a888-870195dca5d0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.108331 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" event={"ID":"a6f96dc0-0ac5-4a4a-a888-870195dca5d0","Type":"ContainerDied","Data":"1495eafe33d330f66a71be1b2149db1bbf36e390955824663e797501b9b3796d"} Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.108380 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1495eafe33d330f66a71be1b2149db1bbf36e390955824663e797501b9b3796d" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.108449 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.197520 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn"] Nov 28 17:28:49 crc kubenswrapper[5024]: E1128 17:28:49.198246 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b59921a-033f-454b-afad-20ee4a3481e4" containerName="aodh-db-sync" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.198264 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b59921a-033f-454b-afad-20ee4a3481e4" containerName="aodh-db-sync" Nov 28 17:28:49 crc kubenswrapper[5024]: E1128 17:28:49.198287 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f96dc0-0ac5-4a4a-a888-870195dca5d0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.198299 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f96dc0-0ac5-4a4a-a888-870195dca5d0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.198568 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f96dc0-0ac5-4a4a-a888-870195dca5d0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.198586 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b59921a-033f-454b-afad-20ee4a3481e4" containerName="aodh-db-sync" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.199676 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.220951 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn"] Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.225003 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.225384 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.225486 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnp46\" (UniqueName: \"kubernetes.io/projected/84151dea-3c62-4ac2-a85d-55b7bafba2ac-kube-api-access-lnp46\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.225949 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.226134 5024 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.226240 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.226764 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.335847 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.338417 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnp46\" (UniqueName: \"kubernetes.io/projected/84151dea-3c62-4ac2-a85d-55b7bafba2ac-kube-api-access-lnp46\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.338746 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.360403 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.360770 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-api" containerID="cri-o://dac23869b1289bb8fbec39dcebab8b98a0621be86532f7ee1b00735a86d98a58" gracePeriod=30 Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.361544 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-listener" containerID="cri-o://0d8d11298432d40baba87a5d8e159b7d94777f3cccbcfc09f3dce60aff49aca0" gracePeriod=30 Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.361599 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-notifier" containerID="cri-o://12125e8db7eb6002da74d08541a8bba33419348a59183974570d98f44a5b5765" gracePeriod=30 Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.361629 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-evaluator" containerID="cri-o://062af754ed65bc2e923d006f61d93f3298b86daf2ec7cd8afc5e8819a4b504cc" gracePeriod=30 Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.364439 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.372110 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.375951 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnp46\" (UniqueName: \"kubernetes.io/projected/84151dea-3c62-4ac2-a85d-55b7bafba2ac-kube-api-access-lnp46\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qv6fn\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:49 crc kubenswrapper[5024]: I1128 17:28:49.561055 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:50 crc kubenswrapper[5024]: I1128 17:28:50.122502 5024 generic.go:334] "Generic (PLEG): container finished" podID="2e9856a4-36be-4430-a239-6a83871dd474" containerID="062af754ed65bc2e923d006f61d93f3298b86daf2ec7cd8afc5e8819a4b504cc" exitCode=0 Nov 28 17:28:50 crc kubenswrapper[5024]: I1128 17:28:50.122956 5024 generic.go:334] "Generic (PLEG): container finished" podID="2e9856a4-36be-4430-a239-6a83871dd474" containerID="dac23869b1289bb8fbec39dcebab8b98a0621be86532f7ee1b00735a86d98a58" exitCode=0 Nov 28 17:28:50 crc kubenswrapper[5024]: I1128 17:28:50.122567 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerDied","Data":"062af754ed65bc2e923d006f61d93f3298b86daf2ec7cd8afc5e8819a4b504cc"} Nov 28 17:28:50 crc kubenswrapper[5024]: I1128 17:28:50.122998 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerDied","Data":"dac23869b1289bb8fbec39dcebab8b98a0621be86532f7ee1b00735a86d98a58"} Nov 28 17:28:50 crc kubenswrapper[5024]: I1128 17:28:50.201649 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn"] Nov 28 17:28:51 crc kubenswrapper[5024]: I1128 17:28:51.135632 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" event={"ID":"84151dea-3c62-4ac2-a85d-55b7bafba2ac","Type":"ContainerStarted","Data":"279d8accfd6dcff5751c7c220334393204ba23eeed8c2262e922b0ff1e6a156f"} Nov 28 17:28:51 crc kubenswrapper[5024]: I1128 17:28:51.135951 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" event={"ID":"84151dea-3c62-4ac2-a85d-55b7bafba2ac","Type":"ContainerStarted","Data":"7e78989c46f3ba56653b23a34532fb3d3d52f9553093e66f9bd7aaed9a25aaab"} Nov 28 17:28:51 crc kubenswrapper[5024]: I1128 17:28:51.180565 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" podStartSLOduration=1.7151142940000002 podStartE2EDuration="2.180542498s" podCreationTimestamp="2025-11-28 17:28:49 +0000 UTC" firstStartedPulling="2025-11-28 17:28:50.204603581 +0000 UTC m=+1832.253524486" lastFinishedPulling="2025-11-28 17:28:50.670031785 +0000 UTC m=+1832.718952690" 
observedRunningTime="2025-11-28 17:28:51.16441729 +0000 UTC m=+1833.213338195" watchObservedRunningTime="2025-11-28 17:28:51.180542498 +0000 UTC m=+1833.229463403" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.123818 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.172248 5024 generic.go:334] "Generic (PLEG): container finished" podID="2e9856a4-36be-4430-a239-6a83871dd474" containerID="0d8d11298432d40baba87a5d8e159b7d94777f3cccbcfc09f3dce60aff49aca0" exitCode=0 Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.172298 5024 generic.go:334] "Generic (PLEG): container finished" podID="2e9856a4-36be-4430-a239-6a83871dd474" containerID="12125e8db7eb6002da74d08541a8bba33419348a59183974570d98f44a5b5765" exitCode=0 Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.172360 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerDied","Data":"0d8d11298432d40baba87a5d8e159b7d94777f3cccbcfc09f3dce60aff49aca0"} Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.172388 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerDied","Data":"12125e8db7eb6002da74d08541a8bba33419348a59183974570d98f44a5b5765"} Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.189510 5024 generic.go:334] "Generic (PLEG): container finished" podID="58bfac75-cfac-4404-b44b-1ca7b1a94442" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" exitCode=0 Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.190653 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-586d869b9-5wnvb" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.191339 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-586d869b9-5wnvb" event={"ID":"58bfac75-cfac-4404-b44b-1ca7b1a94442","Type":"ContainerDied","Data":"15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76"} Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.191365 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-586d869b9-5wnvb" event={"ID":"58bfac75-cfac-4404-b44b-1ca7b1a94442","Type":"ContainerDied","Data":"a1fded5ac06ce7021198701da541a1154bdfb58976ff28b0de850030d006a83d"} Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.191383 5024 scope.go:117] "RemoveContainer" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.214314 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data\") pod \"58bfac75-cfac-4404-b44b-1ca7b1a94442\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.214365 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cghdg\" (UniqueName: \"kubernetes.io/projected/58bfac75-cfac-4404-b44b-1ca7b1a94442-kube-api-access-cghdg\") pod \"58bfac75-cfac-4404-b44b-1ca7b1a94442\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.224677 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bfac75-cfac-4404-b44b-1ca7b1a94442-kube-api-access-cghdg" (OuterVolumeSpecName: "kube-api-access-cghdg") pod "58bfac75-cfac-4404-b44b-1ca7b1a94442" (UID: "58bfac75-cfac-4404-b44b-1ca7b1a94442"). InnerVolumeSpecName "kube-api-access-cghdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.233688 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.242631 5024 scope.go:117] "RemoveContainer" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" Nov 28 17:28:52 crc kubenswrapper[5024]: E1128 17:28:52.243109 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76\": container with ID starting with 15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76 not found: ID does not exist" containerID="15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.243196 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76"} err="failed to get container status \"15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76\": rpc error: code = NotFound desc = could not find container \"15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76\": container with ID starting with 15088484bf70190015d90e59b6a2d4fe4bc675525eca3f163cadc0cdb3b77e76 not found: ID does not exist" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.307055 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data" (OuterVolumeSpecName: "config-data") pod "58bfac75-cfac-4404-b44b-1ca7b1a94442" (UID: "58bfac75-cfac-4404-b44b-1ca7b1a94442"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.317335 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data-custom\") pod \"58bfac75-cfac-4404-b44b-1ca7b1a94442\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.317416 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-combined-ca-bundle\") pod \"58bfac75-cfac-4404-b44b-1ca7b1a94442\" (UID: \"58bfac75-cfac-4404-b44b-1ca7b1a94442\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.318491 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.318515 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cghdg\" (UniqueName: \"kubernetes.io/projected/58bfac75-cfac-4404-b44b-1ca7b1a94442-kube-api-access-cghdg\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.324914 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "58bfac75-cfac-4404-b44b-1ca7b1a94442" (UID: "58bfac75-cfac-4404-b44b-1ca7b1a94442"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.350052 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58bfac75-cfac-4404-b44b-1ca7b1a94442" (UID: "58bfac75-cfac-4404-b44b-1ca7b1a94442"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.419628 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-config-data\") pod \"2e9856a4-36be-4430-a239-6a83871dd474\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.419727 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-scripts\") pod \"2e9856a4-36be-4430-a239-6a83871dd474\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.419853 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-combined-ca-bundle\") pod \"2e9856a4-36be-4430-a239-6a83871dd474\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.419921 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-public-tls-certs\") pod \"2e9856a4-36be-4430-a239-6a83871dd474\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.420041 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tll8z\" (UniqueName: \"kubernetes.io/projected/2e9856a4-36be-4430-a239-6a83871dd474-kube-api-access-tll8z\") pod \"2e9856a4-36be-4430-a239-6a83871dd474\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.420118 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-internal-tls-certs\") pod \"2e9856a4-36be-4430-a239-6a83871dd474\" (UID: \"2e9856a4-36be-4430-a239-6a83871dd474\") " Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.422361 5024 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.422428 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58bfac75-cfac-4404-b44b-1ca7b1a94442-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.423871 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-scripts" (OuterVolumeSpecName: "scripts") pod "2e9856a4-36be-4430-a239-6a83871dd474" (UID: "2e9856a4-36be-4430-a239-6a83871dd474"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.424490 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9856a4-36be-4430-a239-6a83871dd474-kube-api-access-tll8z" (OuterVolumeSpecName: "kube-api-access-tll8z") pod "2e9856a4-36be-4430-a239-6a83871dd474" (UID: "2e9856a4-36be-4430-a239-6a83871dd474"). InnerVolumeSpecName "kube-api-access-tll8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.483507 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e9856a4-36be-4430-a239-6a83871dd474" (UID: "2e9856a4-36be-4430-a239-6a83871dd474"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.503692 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2e9856a4-36be-4430-a239-6a83871dd474" (UID: "2e9856a4-36be-4430-a239-6a83871dd474"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.526861 5024 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.526896 5024 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.526910 5024 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.526922 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tll8z\" (UniqueName: \"kubernetes.io/projected/2e9856a4-36be-4430-a239-6a83871dd474-kube-api-access-tll8z\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.563645 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e9856a4-36be-4430-a239-6a83871dd474" (UID: "2e9856a4-36be-4430-a239-6a83871dd474"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.619823 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-config-data" (OuterVolumeSpecName: "config-data") pod "2e9856a4-36be-4430-a239-6a83871dd474" (UID: "2e9856a4-36be-4430-a239-6a83871dd474"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.630209 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.630247 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9856a4-36be-4430-a239-6a83871dd474-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.693839 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-586d869b9-5wnvb"] Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.693882 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-586d869b9-5wnvb"] Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.921516 5024 scope.go:117] "RemoveContainer" containerID="ce2e278d3f1707f10d9ad89dabc644167a10172820b7b2bcd7269601353f016a" Nov 28 17:28:52 crc kubenswrapper[5024]: I1128 17:28:52.946094 5024 scope.go:117] "RemoveContainer" containerID="cb5042ec4d2a9b6dcd9182dd7d36a7d8993c984c37eb1ffeba6cb799e1f9b6ab" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.202536 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e9856a4-36be-4430-a239-6a83871dd474","Type":"ContainerDied","Data":"e0bac06a3eb611f04c591fa7361cb93a70bd4f42edeaac05d7c957e14abf99d5"} Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.202593 5024 scope.go:117] "RemoveContainer" containerID="0d8d11298432d40baba87a5d8e159b7d94777f3cccbcfc09f3dce60aff49aca0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.202607 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.230702 5024 scope.go:117] "RemoveContainer" containerID="12125e8db7eb6002da74d08541a8bba33419348a59183974570d98f44a5b5765" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.245615 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.258742 5024 scope.go:117] "RemoveContainer" containerID="062af754ed65bc2e923d006f61d93f3298b86daf2ec7cd8afc5e8819a4b504cc" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.276340 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.292418 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 17:28:53 crc kubenswrapper[5024]: E1128 17:28:53.293143 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bfac75-cfac-4404-b44b-1ca7b1a94442" containerName="heat-engine" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293167 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bfac75-cfac-4404-b44b-1ca7b1a94442" containerName="heat-engine" Nov 28 17:28:53 crc kubenswrapper[5024]: E1128 17:28:53.293180 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-api" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293186 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-api" Nov 28 17:28:53 crc kubenswrapper[5024]: E1128 17:28:53.293203 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-notifier" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293210 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-notifier" Nov 28 17:28:53 crc kubenswrapper[5024]: E1128 17:28:53.293234 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-evaluator" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293242 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-evaluator" Nov 28 17:28:53 crc kubenswrapper[5024]: E1128 17:28:53.293283 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-listener" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293290 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-listener" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293571 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-evaluator" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293598 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-notifier" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293649 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-listener" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293663 5024 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="58bfac75-cfac-4404-b44b-1ca7b1a94442" containerName="heat-engine" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.293688 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9856a4-36be-4430-a239-6a83871dd474" containerName="aodh-api" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.294631 5024 scope.go:117] "RemoveContainer" containerID="dac23869b1289bb8fbec39dcebab8b98a0621be86532f7ee1b00735a86d98a58" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.296482 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.299342 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.299573 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.299590 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.300084 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-rjjzq" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.301538 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.307659 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.450668 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.450927 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-public-tls-certs\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.450980 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-internal-tls-certs\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.451000 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-config-data\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.451080 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr7zx\" (UniqueName: \"kubernetes.io/projected/3363602b-4e31-4813-b443-e8bc9468059c-kube-api-access-gr7zx\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.451135 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-scripts\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.553544 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.554297 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-public-tls-certs\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.554495 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-internal-tls-certs\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.554552 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-config-data\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.554708 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr7zx\" (UniqueName: \"kubernetes.io/projected/3363602b-4e31-4813-b443-e8bc9468059c-kube-api-access-gr7zx\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.554845 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-scripts\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.560158 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-internal-tls-certs\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.560205 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.560582 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-public-tls-certs\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.560913 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-config-data\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.564829 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3363602b-4e31-4813-b443-e8bc9468059c-scripts\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.573907 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr7zx\" (UniqueName: \"kubernetes.io/projected/3363602b-4e31-4813-b443-e8bc9468059c-kube-api-access-gr7zx\") pod \"aodh-0\" (UID: \"3363602b-4e31-4813-b443-e8bc9468059c\") " pod="openstack/aodh-0" Nov 28 17:28:53 crc kubenswrapper[5024]: I1128 17:28:53.627760 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 17:28:54 crc kubenswrapper[5024]: E1128 17:28:54.156530 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84151dea_3c62_4ac2_a85d_55b7bafba2ac.slice/crio-279d8accfd6dcff5751c7c220334393204ba23eeed8c2262e922b0ff1e6a156f.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:28:54 crc kubenswrapper[5024]: I1128 17:28:54.194887 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 17:28:54 crc kubenswrapper[5024]: I1128 17:28:54.243437 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3363602b-4e31-4813-b443-e8bc9468059c","Type":"ContainerStarted","Data":"f5413fc4a95f97e9fc864059c9342e06a3c5f1a5a922820e8bf341b895e7019c"} Nov 28 17:28:54 crc kubenswrapper[5024]: I1128 17:28:54.246541 5024 generic.go:334] "Generic (PLEG): container finished" podID="84151dea-3c62-4ac2-a85d-55b7bafba2ac" containerID="279d8accfd6dcff5751c7c220334393204ba23eeed8c2262e922b0ff1e6a156f" exitCode=0 Nov 28 17:28:54 crc kubenswrapper[5024]: I1128 17:28:54.246612 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" event={"ID":"84151dea-3c62-4ac2-a85d-55b7bafba2ac","Type":"ContainerDied","Data":"279d8accfd6dcff5751c7c220334393204ba23eeed8c2262e922b0ff1e6a156f"} Nov 28 17:28:54 crc kubenswrapper[5024]: I1128 17:28:54.518521 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9856a4-36be-4430-a239-6a83871dd474" path="/var/lib/kubelet/pods/2e9856a4-36be-4430-a239-6a83871dd474/volumes" Nov 28 17:28:54 crc kubenswrapper[5024]: I1128 17:28:54.519548 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58bfac75-cfac-4404-b44b-1ca7b1a94442" path="/var/lib/kubelet/pods/58bfac75-cfac-4404-b44b-1ca7b1a94442/volumes" Nov 28 17:28:55 crc kubenswrapper[5024]: I1128 17:28:55.262675 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3363602b-4e31-4813-b443-e8bc9468059c","Type":"ContainerStarted","Data":"418aab3a33b0d2d94a7c69f1e2007a7d16b0dcbc9ba7e15425b14cc1a3e02d6d"} Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.013025 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.147585 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnp46\" (UniqueName: \"kubernetes.io/projected/84151dea-3c62-4ac2-a85d-55b7bafba2ac-kube-api-access-lnp46\") pod \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.147854 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-inventory\") pod \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.147906 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-ssh-key\") pod \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\" (UID: \"84151dea-3c62-4ac2-a85d-55b7bafba2ac\") " Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.161409 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84151dea-3c62-4ac2-a85d-55b7bafba2ac-kube-api-access-lnp46" (OuterVolumeSpecName: "kube-api-access-lnp46") pod "84151dea-3c62-4ac2-a85d-55b7bafba2ac" (UID: "84151dea-3c62-4ac2-a85d-55b7bafba2ac"). InnerVolumeSpecName "kube-api-access-lnp46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.180747 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-inventory" (OuterVolumeSpecName: "inventory") pod "84151dea-3c62-4ac2-a85d-55b7bafba2ac" (UID: "84151dea-3c62-4ac2-a85d-55b7bafba2ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.181403 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "84151dea-3c62-4ac2-a85d-55b7bafba2ac" (UID: "84151dea-3c62-4ac2-a85d-55b7bafba2ac"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.250536 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.250565 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84151dea-3c62-4ac2-a85d-55b7bafba2ac-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.250575 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnp46\" (UniqueName: \"kubernetes.io/projected/84151dea-3c62-4ac2-a85d-55b7bafba2ac-kube-api-access-lnp46\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.277455 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3363602b-4e31-4813-b443-e8bc9468059c","Type":"ContainerStarted","Data":"a3275e840f71faffa52d89087525f2ec7b6e9a71907be7b3d64d0c1419a7f602"} Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.279704 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" event={"ID":"84151dea-3c62-4ac2-a85d-55b7bafba2ac","Type":"ContainerDied","Data":"7e78989c46f3ba56653b23a34532fb3d3d52f9553093e66f9bd7aaed9a25aaab"} Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.279743 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e78989c46f3ba56653b23a34532fb3d3d52f9553093e66f9bd7aaed9a25aaab" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.279756 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qv6fn" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.410360 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc"] Nov 28 17:28:56 crc kubenswrapper[5024]: E1128 17:28:56.411180 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84151dea-3c62-4ac2-a85d-55b7bafba2ac" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.411194 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="84151dea-3c62-4ac2-a85d-55b7bafba2ac" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.411437 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="84151dea-3c62-4ac2-a85d-55b7bafba2ac" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.412328 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.415291 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.415523 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.415689 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.415839 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.425631 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc"] Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.561361 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.561504 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.561776 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptrl\" (UniqueName: \"kubernetes.io/projected/c2e066c9-5f85-4782-9317-546bcc3457e8-kube-api-access-2ptrl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.561981 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.665209 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ptrl\" (UniqueName: \"kubernetes.io/projected/c2e066c9-5f85-4782-9317-546bcc3457e8-kube-api-access-2ptrl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.665393 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-bootstrap-combined-ca-bundle\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.665703 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.665778 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.672371 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.674813 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.674894 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.684366 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ptrl\" (UniqueName: \"kubernetes.io/projected/c2e066c9-5f85-4782-9317-546bcc3457e8-kube-api-access-2ptrl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:56 crc kubenswrapper[5024]: I1128 17:28:56.737513 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:28:57 crc kubenswrapper[5024]: I1128 17:28:57.759589 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc"] Nov 28 17:28:58 crc kubenswrapper[5024]: I1128 17:28:58.324777 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3363602b-4e31-4813-b443-e8bc9468059c","Type":"ContainerStarted","Data":"9b57e8f2b34d90300a72d1176cd8f6b62ca59a3ea11f8873071c8a69c2a5eca9"} Nov 28 17:28:58 crc kubenswrapper[5024]: I1128 17:28:58.326565 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" event={"ID":"c2e066c9-5f85-4782-9317-546bcc3457e8","Type":"ContainerStarted","Data":"210ba62d14c725b0212f903502f5cbda17c57937012d1764052fb22b516a11f2"} Nov 28 17:28:58 crc kubenswrapper[5024]: I1128 17:28:58.517864 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:28:58 crc kubenswrapper[5024]: E1128 17:28:58.518488 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:28:59 crc kubenswrapper[5024]: I1128 17:28:59.342321 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3363602b-4e31-4813-b443-e8bc9468059c","Type":"ContainerStarted","Data":"824986e1d128d9c718574788cfda81148f18c9b399431c08520cd009a9c50a0e"} Nov 28 17:28:59 crc kubenswrapper[5024]: I1128 17:28:59.346423 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" event={"ID":"c2e066c9-5f85-4782-9317-546bcc3457e8","Type":"ContainerStarted","Data":"b363bd476f7432c67afc82618ac23192b9434d5ffec7252fd6dbc425da5fe89c"} Nov 28 17:28:59 crc kubenswrapper[5024]: I1128 17:28:59.371938 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.79638154 podStartE2EDuration="6.37191233s" podCreationTimestamp="2025-11-28 17:28:53 +0000 UTC" firstStartedPulling="2025-11-28 17:28:54.213032375 +0000 UTC m=+1836.261953290" lastFinishedPulling="2025-11-28 17:28:58.788563175 +0000 UTC m=+1840.837484080" observedRunningTime="2025-11-28 17:28:59.362125796 +0000 UTC m=+1841.411046701" watchObservedRunningTime="2025-11-28 17:28:59.37191233 +0000 UTC m=+1841.420833235" Nov 28 17:28:59 crc kubenswrapper[5024]: I1128 17:28:59.398494 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" podStartSLOduration=2.432650416 podStartE2EDuration="3.398477451s" podCreationTimestamp="2025-11-28 17:28:56 +0000 UTC" firstStartedPulling="2025-11-28 17:28:57.822211165 +0000 UTC m=+1839.871132080" lastFinishedPulling="2025-11-28 17:28:58.78803821 +0000 UTC m=+1840.836959115" observedRunningTime="2025-11-28 17:28:59.388965805 +0000 UTC m=+1841.437886710" watchObservedRunningTime="2025-11-28 17:28:59.398477451 +0000 UTC m=+1841.447398356" Nov 28 17:29:13 crc kubenswrapper[5024]: I1128 17:29:13.498897 5024 scope.go:117] 
"RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:29:13 crc kubenswrapper[5024]: E1128 17:29:13.499883 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:29:24 crc kubenswrapper[5024]: I1128 17:29:24.498197 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:29:24 crc kubenswrapper[5024]: E1128 17:29:24.499082 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:29:36 crc kubenswrapper[5024]: I1128 17:29:36.498108 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:29:36 crc kubenswrapper[5024]: E1128 17:29:36.499236 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.710258 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9fxw"] Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.719544 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.728939 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9fxw"] Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.803551 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f432326-1cdb-4caf-a0d0-c25304f63d47-utilities\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.803880 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gjd\" (UniqueName: \"kubernetes.io/projected/4f432326-1cdb-4caf-a0d0-c25304f63d47-kube-api-access-94gjd\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.804164 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f432326-1cdb-4caf-a0d0-c25304f63d47-catalog-content\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.906935 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f432326-1cdb-4caf-a0d0-c25304f63d47-catalog-content\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.907011 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f432326-1cdb-4caf-a0d0-c25304f63d47-utilities\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.907105 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94gjd\" (UniqueName: \"kubernetes.io/projected/4f432326-1cdb-4caf-a0d0-c25304f63d47-kube-api-access-94gjd\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.907573 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f432326-1cdb-4caf-a0d0-c25304f63d47-catalog-content\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.907585 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f432326-1cdb-4caf-a0d0-c25304f63d47-utilities\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:38 crc kubenswrapper[5024]: I1128 17:29:38.925640 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-94gjd\" (UniqueName: \"kubernetes.io/projected/4f432326-1cdb-4caf-a0d0-c25304f63d47-kube-api-access-94gjd\") pod \"certified-operators-r9fxw\" (UID: \"4f432326-1cdb-4caf-a0d0-c25304f63d47\") " pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:39 crc kubenswrapper[5024]: I1128 17:29:39.063186 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:39 crc kubenswrapper[5024]: I1128 17:29:39.882685 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9fxw"] Nov 28 17:29:40 crc kubenswrapper[5024]: I1128 17:29:40.837991 5024 generic.go:334] "Generic (PLEG): container finished" podID="4f432326-1cdb-4caf-a0d0-c25304f63d47" containerID="bea5eef09458be01827075c972dc074d0736158791303a0b2b8b32b11a7393b1" exitCode=0 Nov 28 17:29:40 crc kubenswrapper[5024]: I1128 17:29:40.838059 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9fxw" event={"ID":"4f432326-1cdb-4caf-a0d0-c25304f63d47","Type":"ContainerDied","Data":"bea5eef09458be01827075c972dc074d0736158791303a0b2b8b32b11a7393b1"} Nov 28 17:29:40 crc kubenswrapper[5024]: I1128 17:29:40.838304 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9fxw" event={"ID":"4f432326-1cdb-4caf-a0d0-c25304f63d47","Type":"ContainerStarted","Data":"fe02c1baf294516aeec040abf4b1f32613fbc3e33b2d1b919193fa60f9a74601"} Nov 28 17:29:49 crc kubenswrapper[5024]: I1128 17:29:49.498319 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:29:49 crc kubenswrapper[5024]: E1128 17:29:49.499290 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:29:50 crc kubenswrapper[5024]: I1128 17:29:50.958244 5024 generic.go:334] "Generic (PLEG): container finished" podID="4f432326-1cdb-4caf-a0d0-c25304f63d47" containerID="9e21d4cacd22a4783a8b351261b6391269bfe5acd93553470a1d8fd8dc4443c4" exitCode=0 Nov 28 17:29:50 crc kubenswrapper[5024]: I1128 17:29:50.958348 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9fxw" event={"ID":"4f432326-1cdb-4caf-a0d0-c25304f63d47","Type":"ContainerDied","Data":"9e21d4cacd22a4783a8b351261b6391269bfe5acd93553470a1d8fd8dc4443c4"} Nov 28 17:29:51 crc kubenswrapper[5024]: I1128 17:29:51.972062 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9fxw" event={"ID":"4f432326-1cdb-4caf-a0d0-c25304f63d47","Type":"ContainerStarted","Data":"d387370bfd9e1fa6366e5c185b70cef18ff39408dd7297d263793c8e1d2c3dba"} Nov 28 17:29:51 crc kubenswrapper[5024]: I1128 17:29:51.995326 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9fxw" podStartSLOduration=3.36650424 podStartE2EDuration="13.995307403s" podCreationTimestamp="2025-11-28 17:29:38 +0000 UTC" firstStartedPulling="2025-11-28 17:29:40.84020655 +0000 UTC m=+1882.889127445" 
lastFinishedPulling="2025-11-28 17:29:51.469009703 +0000 UTC m=+1893.517930608" observedRunningTime="2025-11-28 17:29:51.988781444 +0000 UTC m=+1894.037702349" watchObservedRunningTime="2025-11-28 17:29:51.995307403 +0000 UTC m=+1894.044228298" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.154575 5024 scope.go:117] "RemoveContainer" containerID="82f69f7b7e9ee7387ced8e1aa87d6491efffe0f2dec97ef139107b8df63ef8e8" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.755408 5024 scope.go:117] "RemoveContainer" containerID="5e06065ce6d7b2c1f85ff98da035c9dc824ed8fd519c3136f7fd10e99950a85b" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.779766 5024 scope.go:117] "RemoveContainer" containerID="c64d3ed6fe34d3578fb2e3b55010dea4e69b48fd200e96d8a82c7df82889991c" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.812962 5024 scope.go:117] "RemoveContainer" containerID="3a8f40b037ce3bd2d5611352de6ae3e7dd3d44add89c632e74057caaa7410e2b" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.839320 5024 scope.go:117] "RemoveContainer" containerID="bf757829288d6b021bc184437632f82b85387f3458f8353c874e74d9ab14a1ea" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.870485 5024 scope.go:117] "RemoveContainer" containerID="8f60047233d3a2b49addc663cc7233ac40ef32f8aac275dc99dc5b688c91832f" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.894381 5024 scope.go:117] "RemoveContainer" containerID="0bb475ce26086a657e3b2554e7e4a9de5919013fec482f54b26765bd424f8b92" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.920637 5024 scope.go:117] "RemoveContainer" containerID="f6dae8af7c9bd65f23883ec145b935412c7390547cdac3b8cd42e249faf851da" Nov 28 17:29:53 crc kubenswrapper[5024]: I1128 17:29:53.945333 5024 scope.go:117] "RemoveContainer" containerID="715ed801f1e596f73d0112b81e786911311908f9e8d1751465ac8ba6857ef75e" Nov 28 17:29:59 crc kubenswrapper[5024]: I1128 17:29:59.063406 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:59 crc kubenswrapper[5024]: I1128 17:29:59.064158 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:29:59 crc kubenswrapper[5024]: I1128 17:29:59.114383 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.140405 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9fxw" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.170588 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn"] Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.173332 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.176433 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.176904 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.184908 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn"] Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.364410 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-config-volume\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.364500 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-secret-volume\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.364557 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9zwg\" (UniqueName: \"kubernetes.io/projected/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-kube-api-access-b9zwg\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.467861 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-config-volume\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.467923 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-secret-volume\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.467973 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9zwg\" (UniqueName: \"kubernetes.io/projected/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-kube-api-access-b9zwg\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.469302 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-config-volume\") pod 
\"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.480748 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-secret-volume\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.494460 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9zwg\" (UniqueName: \"kubernetes.io/projected/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-kube-api-access-b9zwg\") pod \"collect-profiles-29405850-ssxdn\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:00 crc kubenswrapper[5024]: I1128 17:30:00.499335 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:01 crc kubenswrapper[5024]: I1128 17:30:01.058917 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn"] Nov 28 17:30:01 crc kubenswrapper[5024]: I1128 17:30:01.084114 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" event={"ID":"8ac17d18-79a2-48b3-8ea9-e1e84f472a51","Type":"ContainerStarted","Data":"11311cd92ed04352130753506ba7b6f08a19cfb46b9e0a2b040b73a1e27f085e"} Nov 28 17:30:02 crc kubenswrapper[5024]: I1128 17:30:02.097658 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" event={"ID":"8ac17d18-79a2-48b3-8ea9-e1e84f472a51","Type":"ContainerStarted","Data":"58f6c9808de01267a71d21d9e7d987d236c2f2fc3c1792f09f44f08e89daee43"} Nov 28 17:30:02 crc kubenswrapper[5024]: I1128 17:30:02.123595 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" podStartSLOduration=2.123578073 podStartE2EDuration="2.123578073s" podCreationTimestamp="2025-11-28 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:30:02.112817881 +0000 UTC m=+1904.161738786" watchObservedRunningTime="2025-11-28 17:30:02.123578073 +0000 UTC m=+1904.172498978" Nov 28 17:30:02 crc kubenswrapper[5024]: I1128 17:30:02.229959 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9fxw"] Nov 28 17:30:02 crc kubenswrapper[5024]: I1128 17:30:02.810798 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcnqq"] Nov 28 17:30:02 crc kubenswrapper[5024]: I1128 17:30:02.811032 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kcnqq" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="registry-server" containerID="cri-o://588bed846a220a168eb3e4c654d5df623dd6f6cff7239a47ce4bd3543436b15f" gracePeriod=2 Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.158028 5024 generic.go:334] "Generic (PLEG): container finished" 
podID="8ac17d18-79a2-48b3-8ea9-e1e84f472a51" containerID="58f6c9808de01267a71d21d9e7d987d236c2f2fc3c1792f09f44f08e89daee43" exitCode=0 Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.158336 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" event={"ID":"8ac17d18-79a2-48b3-8ea9-e1e84f472a51","Type":"ContainerDied","Data":"58f6c9808de01267a71d21d9e7d987d236c2f2fc3c1792f09f44f08e89daee43"} Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.172488 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnqq" event={"ID":"a5d05e6a-edfa-4707-959c-c3997debbed1","Type":"ContainerDied","Data":"588bed846a220a168eb3e4c654d5df623dd6f6cff7239a47ce4bd3543436b15f"} Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.172518 5024 generic.go:334] "Generic (PLEG): container finished" podID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerID="588bed846a220a168eb3e4c654d5df623dd6f6cff7239a47ce4bd3543436b15f" exitCode=0 Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.498308 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:30:03 crc kubenswrapper[5024]: E1128 17:30:03.498742 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.524389 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.576363 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flgtp\" (UniqueName: \"kubernetes.io/projected/a5d05e6a-edfa-4707-959c-c3997debbed1-kube-api-access-flgtp\") pod \"a5d05e6a-edfa-4707-959c-c3997debbed1\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.576529 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-utilities\") pod \"a5d05e6a-edfa-4707-959c-c3997debbed1\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.576587 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-catalog-content\") pod \"a5d05e6a-edfa-4707-959c-c3997debbed1\" (UID: \"a5d05e6a-edfa-4707-959c-c3997debbed1\") " Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.579373 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-utilities" (OuterVolumeSpecName: "utilities") pod "a5d05e6a-edfa-4707-959c-c3997debbed1" (UID: "a5d05e6a-edfa-4707-959c-c3997debbed1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.590413 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d05e6a-edfa-4707-959c-c3997debbed1-kube-api-access-flgtp" (OuterVolumeSpecName: "kube-api-access-flgtp") pod "a5d05e6a-edfa-4707-959c-c3997debbed1" (UID: "a5d05e6a-edfa-4707-959c-c3997debbed1"). InnerVolumeSpecName "kube-api-access-flgtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.679482 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flgtp\" (UniqueName: \"kubernetes.io/projected/a5d05e6a-edfa-4707-959c-c3997debbed1-kube-api-access-flgtp\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.679511 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.688751 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5d05e6a-edfa-4707-959c-c3997debbed1" (UID: "a5d05e6a-edfa-4707-959c-c3997debbed1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:30:03 crc kubenswrapper[5024]: I1128 17:30:03.782841 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5d05e6a-edfa-4707-959c-c3997debbed1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.192663 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kcnqq" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.192771 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kcnqq" event={"ID":"a5d05e6a-edfa-4707-959c-c3997debbed1","Type":"ContainerDied","Data":"346b10e8d8f800788a2657c06e2dfd7108fdb00afc95744844c8c878a32ceca2"} Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.192823 5024 scope.go:117] "RemoveContainer" containerID="588bed846a220a168eb3e4c654d5df623dd6f6cff7239a47ce4bd3543436b15f" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.240569 5024 scope.go:117] "RemoveContainer" containerID="7817104fcdf47d8678ee0fb55a2ad20ac63a50a76306540d0a6ceb81c823e546" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.264862 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kcnqq"] Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.296439 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kcnqq"] Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.386291 5024 scope.go:117] "RemoveContainer" containerID="0e0bfd592e07d2f7085b042406b0a8f80e50b56dd98f425e59d4ab21a584716b" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.533855 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" path="/var/lib/kubelet/pods/a5d05e6a-edfa-4707-959c-c3997debbed1/volumes" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.812504 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.965521 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-secret-volume\") pod \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.965888 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-config-volume\") pod \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.965963 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9zwg\" (UniqueName: \"kubernetes.io/projected/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-kube-api-access-b9zwg\") pod \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\" (UID: \"8ac17d18-79a2-48b3-8ea9-e1e84f472a51\") " Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.966764 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-config-volume" (OuterVolumeSpecName: "config-volume") pod "8ac17d18-79a2-48b3-8ea9-e1e84f472a51" (UID: "8ac17d18-79a2-48b3-8ea9-e1e84f472a51"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.981623 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8ac17d18-79a2-48b3-8ea9-e1e84f472a51" (UID: "8ac17d18-79a2-48b3-8ea9-e1e84f472a51"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:30:04 crc kubenswrapper[5024]: I1128 17:30:04.981811 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-kube-api-access-b9zwg" (OuterVolumeSpecName: "kube-api-access-b9zwg") pod "8ac17d18-79a2-48b3-8ea9-e1e84f472a51" (UID: "8ac17d18-79a2-48b3-8ea9-e1e84f472a51"). InnerVolumeSpecName "kube-api-access-b9zwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:30:05 crc kubenswrapper[5024]: I1128 17:30:05.068947 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:05 crc kubenswrapper[5024]: I1128 17:30:05.068990 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:05 crc kubenswrapper[5024]: I1128 17:30:05.069005 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9zwg\" (UniqueName: \"kubernetes.io/projected/8ac17d18-79a2-48b3-8ea9-e1e84f472a51-kube-api-access-b9zwg\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:05 crc kubenswrapper[5024]: I1128 17:30:05.206843 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" event={"ID":"8ac17d18-79a2-48b3-8ea9-e1e84f472a51","Type":"ContainerDied","Data":"11311cd92ed04352130753506ba7b6f08a19cfb46b9e0a2b040b73a1e27f085e"} Nov 28 17:30:05 crc kubenswrapper[5024]: I1128 17:30:05.206880 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11311cd92ed04352130753506ba7b6f08a19cfb46b9e0a2b040b73a1e27f085e" Nov 28 17:30:05 crc kubenswrapper[5024]: I1128 17:30:05.206929 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn" Nov 28 17:30:14 crc kubenswrapper[5024]: I1128 17:30:14.498659 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:30:15 crc kubenswrapper[5024]: I1128 17:30:15.339805 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"b6b772564713d0d8deeb50543a5adf26c834290c0e443e8b7a14e2ddc0070fe5"} Nov 28 17:30:54 crc kubenswrapper[5024]: I1128 17:30:54.089420 5024 scope.go:117] "RemoveContainer" containerID="665af5b5bcc53c6c5bd3b3f56acdf863e1c6ee4ebb549976b673621475e62806" Nov 28 17:30:54 crc kubenswrapper[5024]: I1128 17:30:54.122692 5024 scope.go:117] "RemoveContainer" containerID="41f54bb3994ff210f5c35e67e4d9fef570e3a33050e2834a5ab6fd219d60ca35" Nov 28 17:30:54 crc kubenswrapper[5024]: I1128 17:30:54.148952 5024 scope.go:117] "RemoveContainer" containerID="075eda0a4905110118d2d5c317ad91f7289a2bf3b0e58aa9b27513d844ae66d4" Nov 28 17:30:54 crc kubenswrapper[5024]: I1128 17:30:54.199364 5024 scope.go:117] "RemoveContainer" containerID="d24d2d82c1369b629307b48e13f1ad08aa07f83436444fdd9e65519fa3729976" Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.065476 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8rngl"] Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.084177 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4zgdn"] Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.099404 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb6d-account-create-update-4b8k6"] Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.110541 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-70f1-account-create-update-slf46"] Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.126984 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8rngl"] Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.142230 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bb6d-account-create-update-4b8k6"] Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.155561 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-70f1-account-create-update-slf46"] Nov 28 17:31:13 crc kubenswrapper[5024]: I1128 17:31:13.168707 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4zgdn"] Nov 28 17:31:14 crc kubenswrapper[5024]: I1128 17:31:14.050081 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-2gsjw"] Nov 28 17:31:14 crc kubenswrapper[5024]: I1128 17:31:14.065875 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-2gsjw"] Nov 28 17:31:14 crc kubenswrapper[5024]: I1128 17:31:14.510936 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2294c836-32c8-47eb-b5de-563fca6deda8" path="/var/lib/kubelet/pods/2294c836-32c8-47eb-b5de-563fca6deda8/volumes" Nov 28 17:31:14 crc kubenswrapper[5024]: I1128 17:31:14.514473 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37deb816-c36f-47c7-9d3a-c7373eabeb1f" path="/var/lib/kubelet/pods/37deb816-c36f-47c7-9d3a-c7373eabeb1f/volumes" Nov 28 17:31:14 crc kubenswrapper[5024]: I1128 17:31:14.515315 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9c6756-7897-48cb-a004-c8bfe09d4520" path="/var/lib/kubelet/pods/9e9c6756-7897-48cb-a004-c8bfe09d4520/volumes" Nov 28 17:31:14 crc kubenswrapper[5024]: I1128 17:31:14.518564 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac813be9-87ac-4fc7-b881-542716b8125d" path="/var/lib/kubelet/pods/ac813be9-87ac-4fc7-b881-542716b8125d/volumes" Nov 28 17:31:14 crc kubenswrapper[5024]: I1128 17:31:14.519260 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b69e2a-d3f0-49f6-badd-92d6a30ba281" path="/var/lib/kubelet/pods/d5b69e2a-d3f0-49f6-badd-92d6a30ba281/volumes" Nov 28 17:31:15 crc kubenswrapper[5024]: I1128 17:31:15.046118 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-6839-account-create-update-qx8bd"] Nov 28 17:31:15 crc kubenswrapper[5024]: I1128 17:31:15.056919 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-6839-account-create-update-qx8bd"] Nov 28 17:31:16 crc kubenswrapper[5024]: I1128 17:31:16.513198 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e284ba5-1197-4d62-8671-b092ab8c8fa7" path="/var/lib/kubelet/pods/1e284ba5-1197-4d62-8671-b092ab8c8fa7/volumes" Nov 28 17:31:23 crc kubenswrapper[5024]: I1128 17:31:23.038936 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gkzs2"] Nov 28 17:31:23 crc kubenswrapper[5024]: I1128 17:31:23.053247 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3307-account-create-update-s4hhz"] Nov 28 17:31:23 crc kubenswrapper[5024]: I1128 17:31:23.065928 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3307-account-create-update-s4hhz"] Nov 28 17:31:23 crc kubenswrapper[5024]: I1128 17:31:23.076862 5024 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gkzs2"] Nov 28 17:31:24 crc kubenswrapper[5024]: I1128 17:31:24.512222 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730c1e44-786f-4f58-b6fd-bbc27112ed73" path="/var/lib/kubelet/pods/730c1e44-786f-4f58-b6fd-bbc27112ed73/volumes" Nov 28 17:31:24 crc kubenswrapper[5024]: I1128 17:31:24.514865 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c41427-dbc5-4f74-a83d-021976f51327" path="/var/lib/kubelet/pods/e8c41427-dbc5-4f74-a83d-021976f51327/volumes" Nov 28 17:31:26 crc kubenswrapper[5024]: I1128 17:31:26.054378 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-b5ef-account-create-update-ft92t"] Nov 28 17:31:26 crc kubenswrapper[5024]: I1128 17:31:26.069939 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz"] Nov 28 17:31:26 crc kubenswrapper[5024]: I1128 17:31:26.081738 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-b5ef-account-create-update-ft92t"] Nov 28 17:31:26 crc kubenswrapper[5024]: I1128 17:31:26.092734 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-6q8kz"] Nov 28 17:31:26 crc kubenswrapper[5024]: I1128 17:31:26.510628 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849dfda2-83a5-47f6-aca7-f25ff8136829" path="/var/lib/kubelet/pods/849dfda2-83a5-47f6-aca7-f25ff8136829/volumes" Nov 28 17:31:26 crc kubenswrapper[5024]: I1128 17:31:26.511349 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa0ae71-201a-464c-ad32-6fc693cf3e62" path="/var/lib/kubelet/pods/faa0ae71-201a-464c-ad32-6fc693cf3e62/volumes" Nov 28 17:31:42 crc kubenswrapper[5024]: I1128 17:31:42.032211 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-2117-account-create-update-zwt9d"] Nov 28 17:31:42 crc kubenswrapper[5024]: I1128 17:31:42.044088 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-2117-account-create-update-zwt9d"] Nov 28 17:31:42 crc kubenswrapper[5024]: I1128 17:31:42.535825 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd4b169-ce4b-4b45-969a-7f73011edf61" path="/var/lib/kubelet/pods/6cd4b169-ce4b-4b45-969a-7f73011edf61/volumes" Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.067392 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d45mr"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.084888 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0066-account-create-update-swplb"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.101508 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jmt7n"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.112359 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7056-account-create-update-fh7lw"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.123945 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d45mr"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.136685 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7c64-account-create-update-67zlr"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.148732 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-0066-account-create-update-swplb"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.159717 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jmt7n"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.171761 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-b99cs"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.184574 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-5tcbk"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.199154 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7056-account-create-update-fh7lw"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.212545 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-b99cs"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.228238 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7c64-account-create-update-67zlr"] Nov 28 17:31:53 crc kubenswrapper[5024]: I1128 17:31:53.242624 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5tcbk"] Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.282510 5024 scope.go:117] "RemoveContainer" containerID="eb58e6e86e5c9bf1ccacf44ebc52dedba9f91145b4496dcdeaf6d21db6861ab9" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.311878 5024 scope.go:117] "RemoveContainer" containerID="f850c24e076d8610ed38159cf3435df6e50c6eb16ff897544655c63c11b33c0a" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.370107 5024 scope.go:117] "RemoveContainer" containerID="e53c2f40c52f3e1a783029846d8a1f534a416fb19302e8176450f34ad4d8e1c1" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.434942 5024 scope.go:117] "RemoveContainer" containerID="64f48d753e331b8cffbccf3a0347aa314e57da4224d458651b2a3bc338fe147d" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.468327 5024 scope.go:117] "RemoveContainer" containerID="46e15d1669f9a19e098a4ea14066a2b4d2ecea5c13070966d30be8c5b603da65" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.514338 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547243af-e537-4990-ba48-b668f5a87bb7" path="/var/lib/kubelet/pods/547243af-e537-4990-ba48-b668f5a87bb7/volumes" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.515053 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770b6c25-63f4-4690-9a2e-b64f74e86272" path="/var/lib/kubelet/pods/770b6c25-63f4-4690-9a2e-b64f74e86272/volumes" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.516716 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9848c031-a7cb-4f3e-804b-1142d6ddf3a4" path="/var/lib/kubelet/pods/9848c031-a7cb-4f3e-804b-1142d6ddf3a4/volumes" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.517646 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eff2673-6be4-4fe9-b36d-c7ab184b1a14" path="/var/lib/kubelet/pods/9eff2673-6be4-4fe9-b36d-c7ab184b1a14/volumes" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.519351 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c97879e5-b703-4517-bdef-ff788259266f" path="/var/lib/kubelet/pods/c97879e5-b703-4517-bdef-ff788259266f/volumes" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.520469 5024 scope.go:117] "RemoveContainer" containerID="0df8007f5a4fe9d11a03d235b776517d8607eac5cab61c7251e6f626e1004d2f" Nov 28 17:31:54 crc 
kubenswrapper[5024]: I1128 17:31:54.521647 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9fa01bb-f5e1-437f-b417-f201ad7b2fad" path="/var/lib/kubelet/pods/e9fa01bb-f5e1-437f-b417-f201ad7b2fad/volumes" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.522953 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb03410d-e1f0-4036-81fc-76f81bf76340" path="/var/lib/kubelet/pods/fb03410d-e1f0-4036-81fc-76f81bf76340/volumes" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.581082 5024 scope.go:117] "RemoveContainer" containerID="6f7871d6a2d962d5a9b25bce4cf94999f3876ad3254202326076b5e65b127a66" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.658725 5024 scope.go:117] "RemoveContainer" containerID="2ca0bf6208c29d32dad01e286df5ecfc93bf7cb476a8b650a275be5e21ba6a80" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.684067 5024 scope.go:117] "RemoveContainer" containerID="fc17fef4ebcee3a3cf6546ef0fb903c3827789321d149215834c105ea9d0dcfe" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.714655 5024 scope.go:117] "RemoveContainer" containerID="24d4ef8b20ed1571f4c4b7cc5e3cd031b9e1266734cb1622662c125c385518a9" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.740401 5024 scope.go:117] "RemoveContainer" containerID="1113b52b607e7fe2e78906bb79b9220a97bceb97968caffcdbc3bda892c56303" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.772791 5024 scope.go:117] "RemoveContainer" containerID="539acfcc2901784a05afe01437521097fc8818d2525a233db707fbc55d1fb7a8" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.796060 5024 scope.go:117] "RemoveContainer" containerID="b056391e03d2c5db04e4befcb87f553f9b99d042b4e628aa4aa932e9e1095dc2" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.820480 5024 scope.go:117] "RemoveContainer" containerID="bbd3485a5e04b6e3609d8e627b803d0d59f21578919c16a37761ade5a617ed17" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.845308 5024 scope.go:117] "RemoveContainer" containerID="13afd1e3647203038a7464ee5221ccfbbdd7be4a735ffd09ec1ac782000a2473" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.871732 5024 scope.go:117] "RemoveContainer" containerID="88b9b26a666698ecc9f86da90a1380bdc927d2dd0d6ece467a9f1f7f1c3719f6" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.898173 5024 scope.go:117] "RemoveContainer" containerID="3434faa4421d4c211f09a73519f9b0bcd034235c8ea66e5dafd52eefa8fe0443" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.932118 5024 scope.go:117] "RemoveContainer" containerID="bc0975f5b8227570e94c7468da21a82f6e09d4d10d05b2db1f712c49a8d72a6a" Nov 28 17:31:54 crc kubenswrapper[5024]: I1128 17:31:54.963241 5024 scope.go:117] "RemoveContainer" containerID="8e84bcf72c6fdae9ebeaa642bd7bc9ce3b2433a6cbad8f0f71c3d2e53956de69" Nov 28 17:31:57 crc kubenswrapper[5024]: I1128 17:31:57.027087 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4t4sf"] Nov 28 17:31:57 crc kubenswrapper[5024]: I1128 17:31:57.039993 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4t4sf"] Nov 28 17:31:58 crc kubenswrapper[5024]: I1128 17:31:58.510652 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec4831bb-4252-4d37-83f4-1b9e4f88ea35" path="/var/lib/kubelet/pods/ec4831bb-4252-4d37-83f4-1b9e4f88ea35/volumes" Nov 28 17:32:13 crc kubenswrapper[5024]: I1128 17:32:13.772334 5024 generic.go:334] "Generic (PLEG): container finished" podID="c2e066c9-5f85-4782-9317-546bcc3457e8" 
containerID="b363bd476f7432c67afc82618ac23192b9434d5ffec7252fd6dbc425da5fe89c" exitCode=0 Nov 28 17:32:13 crc kubenswrapper[5024]: I1128 17:32:13.772617 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" event={"ID":"c2e066c9-5f85-4782-9317-546bcc3457e8","Type":"ContainerDied","Data":"b363bd476f7432c67afc82618ac23192b9434d5ffec7252fd6dbc425da5fe89c"} Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.269258 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.372319 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-inventory\") pod \"c2e066c9-5f85-4782-9317-546bcc3457e8\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.373423 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-bootstrap-combined-ca-bundle\") pod \"c2e066c9-5f85-4782-9317-546bcc3457e8\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.373573 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-ssh-key\") pod \"c2e066c9-5f85-4782-9317-546bcc3457e8\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.373689 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ptrl\" (UniqueName: \"kubernetes.io/projected/c2e066c9-5f85-4782-9317-546bcc3457e8-kube-api-access-2ptrl\") pod \"c2e066c9-5f85-4782-9317-546bcc3457e8\" (UID: \"c2e066c9-5f85-4782-9317-546bcc3457e8\") " Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.378695 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c2e066c9-5f85-4782-9317-546bcc3457e8" (UID: "c2e066c9-5f85-4782-9317-546bcc3457e8"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.380052 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e066c9-5f85-4782-9317-546bcc3457e8-kube-api-access-2ptrl" (OuterVolumeSpecName: "kube-api-access-2ptrl") pod "c2e066c9-5f85-4782-9317-546bcc3457e8" (UID: "c2e066c9-5f85-4782-9317-546bcc3457e8"). InnerVolumeSpecName "kube-api-access-2ptrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.414792 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c2e066c9-5f85-4782-9317-546bcc3457e8" (UID: "c2e066c9-5f85-4782-9317-546bcc3457e8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.415471 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-inventory" (OuterVolumeSpecName: "inventory") pod "c2e066c9-5f85-4782-9317-546bcc3457e8" (UID: "c2e066c9-5f85-4782-9317-546bcc3457e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.476557 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ptrl\" (UniqueName: \"kubernetes.io/projected/c2e066c9-5f85-4782-9317-546bcc3457e8-kube-api-access-2ptrl\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.476590 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.476599 5024 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.476610 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2e066c9-5f85-4782-9317-546bcc3457e8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.794251 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" event={"ID":"c2e066c9-5f85-4782-9317-546bcc3457e8","Type":"ContainerDied","Data":"210ba62d14c725b0212f903502f5cbda17c57937012d1764052fb22b516a11f2"} Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.794551 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="210ba62d14c725b0212f903502f5cbda17c57937012d1764052fb22b516a11f2" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.794291 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.944782 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v"] Nov 28 17:32:15 crc kubenswrapper[5024]: E1128 17:32:15.949661 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac17d18-79a2-48b3-8ea9-e1e84f472a51" containerName="collect-profiles" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.949700 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac17d18-79a2-48b3-8ea9-e1e84f472a51" containerName="collect-profiles" Nov 28 17:32:15 crc kubenswrapper[5024]: E1128 17:32:15.949713 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="extract-utilities" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.949720 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="extract-utilities" Nov 28 17:32:15 crc kubenswrapper[5024]: E1128 17:32:15.949742 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e066c9-5f85-4782-9317-546bcc3457e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.949751 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e066c9-5f85-4782-9317-546bcc3457e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:15 crc kubenswrapper[5024]: E1128 17:32:15.949785 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="registry-server" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.949791 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="registry-server" Nov 28 17:32:15 crc kubenswrapper[5024]: E1128 17:32:15.949813 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="extract-content" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.949820 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="extract-content" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.950089 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac17d18-79a2-48b3-8ea9-e1e84f472a51" containerName="collect-profiles" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.950107 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e066c9-5f85-4782-9317-546bcc3457e8" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.950135 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d05e6a-edfa-4707-959c-c3997debbed1" containerName="registry-server" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.951282 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.954131 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.954667 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.954983 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.955074 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:32:15 crc kubenswrapper[5024]: I1128 17:32:15.958523 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v"] Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.095450 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5hs\" (UniqueName: \"kubernetes.io/projected/08a39720-1020-466e-9226-0257994b642f-kube-api-access-wm5hs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.095969 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.096037 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.197863 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5hs\" (UniqueName: \"kubernetes.io/projected/08a39720-1020-466e-9226-0257994b642f-kube-api-access-wm5hs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.198104 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.198159 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.202082 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.202698 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.215976 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5hs\" (UniqueName: \"kubernetes.io/projected/08a39720-1020-466e-9226-0257994b642f-kube-api-access-wm5hs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.277831 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.837354 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v"] Nov 28 17:32:16 crc kubenswrapper[5024]: I1128 17:32:16.840549 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:32:17 crc kubenswrapper[5024]: I1128 17:32:17.813833 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" event={"ID":"08a39720-1020-466e-9226-0257994b642f","Type":"ContainerStarted","Data":"1c67c3361c2743f02bfbd33e4a27ceacbe9ed37fdaa73b88962c11df1f9a64e8"} Nov 28 17:32:17 crc kubenswrapper[5024]: I1128 17:32:17.814123 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" event={"ID":"08a39720-1020-466e-9226-0257994b642f","Type":"ContainerStarted","Data":"be4c986ee373bae2fa40d059cf6c655ce284a1c44c96d857d2f14901723cd925"} Nov 28 17:32:17 crc kubenswrapper[5024]: I1128 17:32:17.837561 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" podStartSLOduration=2.355724531 podStartE2EDuration="2.837539473s" podCreationTimestamp="2025-11-28 17:32:15 +0000 UTC" firstStartedPulling="2025-11-28 17:32:16.840286063 +0000 UTC m=+2038.889206968" lastFinishedPulling="2025-11-28 17:32:17.322100995 +0000 UTC m=+2039.371021910" observedRunningTime="2025-11-28 17:32:17.827397495 +0000 UTC m=+2039.876318400" watchObservedRunningTime="2025-11-28 17:32:17.837539473 +0000 UTC m=+2039.886460378" Nov 28 17:32:30 crc kubenswrapper[5024]: I1128 17:32:30.047585 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-llgqk"] Nov 28 
17:32:30 crc kubenswrapper[5024]: I1128 17:32:30.063573 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-llgqk"] Nov 28 17:32:30 crc kubenswrapper[5024]: I1128 17:32:30.513177 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7446dd9c-45ba-43bc-9160-5f39384e542a" path="/var/lib/kubelet/pods/7446dd9c-45ba-43bc-9160-5f39384e542a/volumes" Nov 28 17:32:37 crc kubenswrapper[5024]: I1128 17:32:37.564788 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:32:37 crc kubenswrapper[5024]: I1128 17:32:37.565554 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:32:43 crc kubenswrapper[5024]: I1128 17:32:43.038416 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tgknw"] Nov 28 17:32:43 crc kubenswrapper[5024]: I1128 17:32:43.051830 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tgknw"] Nov 28 17:32:44 crc kubenswrapper[5024]: I1128 17:32:44.058740 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l8dtc"] Nov 28 17:32:44 crc kubenswrapper[5024]: I1128 17:32:44.076838 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l8dtc"] Nov 28 17:32:44 crc kubenswrapper[5024]: I1128 17:32:44.513653 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0117fc-7c8f-485d-8e97-539af4f3046d" path="/var/lib/kubelet/pods/0a0117fc-7c8f-485d-8e97-539af4f3046d/volumes" Nov 28 17:32:44 crc kubenswrapper[5024]: I1128 17:32:44.514581 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da020492-bf03-4191-aa2b-e335ac55f7b3" path="/var/lib/kubelet/pods/da020492-bf03-4191-aa2b-e335ac55f7b3/volumes" Nov 28 17:32:45 crc kubenswrapper[5024]: I1128 17:32:45.058002 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-8bjh2"] Nov 28 17:32:45 crc kubenswrapper[5024]: I1128 17:32:45.070930 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-8bjh2"] Nov 28 17:32:46 crc kubenswrapper[5024]: I1128 17:32:46.512505 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914b00e1-817d-4776-ae89-1c824e7410bd" path="/var/lib/kubelet/pods/914b00e1-817d-4776-ae89-1c824e7410bd/volumes" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.340249 5024 scope.go:117] "RemoveContainer" containerID="ba86c445cf22f980125961ce40e90ded99dd6b5d05e59e90dc3c59cd97d1246d" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.366034 5024 scope.go:117] "RemoveContainer" containerID="d6e98fcdf95de3cf248a5c6a4ae214279476b78d6a5c6740764948bd57a14405" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.388649 5024 scope.go:117] "RemoveContainer" containerID="a5fea2759b5f5bce75972ef521aac04466f537832c858e9bee5b8c12be7120b4" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.457273 5024 scope.go:117] "RemoveContainer" 
containerID="04429f7cabbc02698fbb0da96ec0f96adb3ac4bb72a4313118de96fcbfeb32e6" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.499445 5024 scope.go:117] "RemoveContainer" containerID="2c4d613edf3072f57c8bc6853f13895ae065d8064a45ffaa439a82c57141bc86" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.525851 5024 scope.go:117] "RemoveContainer" containerID="5de478df90ea4389390242e5b868db719cfb6a30e03e75c2867d0200cdacfd01" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.575512 5024 scope.go:117] "RemoveContainer" containerID="f0695fda48a06b1e114bf03cc4a5508e04945ec1d2529dad5d1148acefb511f0" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.604852 5024 scope.go:117] "RemoveContainer" containerID="52b8c3267caabff5e0e3c87808dbf2e46ed2d0aefecfad782c8fceb9de009672" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.674377 5024 scope.go:117] "RemoveContainer" containerID="ae11a0410cd4b8e465d555568845c1d900d38d8a3eb632674eea2086e8a26178" Nov 28 17:32:55 crc kubenswrapper[5024]: I1128 17:32:55.716356 5024 scope.go:117] "RemoveContainer" containerID="5451eaf0bd3116c15054f998cb71f4b5d9f0d39a9396c60ce88d12f529bf4a52" Nov 28 17:32:59 crc kubenswrapper[5024]: I1128 17:32:59.039729 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-bkwj2"] Nov 28 17:32:59 crc kubenswrapper[5024]: I1128 17:32:59.057628 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-bkwj2"] Nov 28 17:33:00 crc kubenswrapper[5024]: I1128 17:33:00.035667 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ppx6b"] Nov 28 17:33:00 crc kubenswrapper[5024]: I1128 17:33:00.048217 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ppx6b"] Nov 28 17:33:00 crc kubenswrapper[5024]: I1128 17:33:00.530171 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cbe84b-cd7a-4f20-8aab-92fd90f0c939" path="/var/lib/kubelet/pods/92cbe84b-cd7a-4f20-8aab-92fd90f0c939/volumes" Nov 28 17:33:00 crc kubenswrapper[5024]: I1128 17:33:00.532171 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6" path="/var/lib/kubelet/pods/c17c2e08-eb13-4f5f-8ff2-91f1b91c6be6/volumes" Nov 28 17:33:07 crc kubenswrapper[5024]: I1128 17:33:07.565305 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:33:07 crc kubenswrapper[5024]: I1128 17:33:07.565844 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.565481 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.566007 5024 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.566079 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.566930 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6b772564713d0d8deeb50543a5adf26c834290c0e443e8b7a14e2ddc0070fe5"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.566982 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://b6b772564713d0d8deeb50543a5adf26c834290c0e443e8b7a14e2ddc0070fe5" gracePeriod=600 Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.771896 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="b6b772564713d0d8deeb50543a5adf26c834290c0e443e8b7a14e2ddc0070fe5" exitCode=0 Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.771960 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"b6b772564713d0d8deeb50543a5adf26c834290c0e443e8b7a14e2ddc0070fe5"} Nov 28 17:33:37 crc kubenswrapper[5024]: I1128 17:33:37.772063 5024 scope.go:117] "RemoveContainer" containerID="d6e4d673589761485abaff6aad6ecad6d0968cd0c32460c318f166233677dd5b" Nov 28 17:33:38 crc kubenswrapper[5024]: I1128 17:33:38.783992 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240"} Nov 28 17:33:55 crc kubenswrapper[5024]: I1128 17:33:55.869540 5024 scope.go:117] "RemoveContainer" containerID="16ddd04424ccdaf052f15899fd9579c200e2dc5ef6bb7c9a3b36fade3093d5dd" Nov 28 17:33:55 crc kubenswrapper[5024]: I1128 17:33:55.897314 5024 scope.go:117] "RemoveContainer" containerID="be0b1636858f531c9152dae25d7e3f478603251ec2aa68ea14b1d021b63cb264" Nov 28 17:34:22 crc kubenswrapper[5024]: I1128 17:34:22.043020 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qqm8c"] Nov 28 17:34:22 crc kubenswrapper[5024]: I1128 17:34:22.055745 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qqm8c"] Nov 28 17:34:22 crc kubenswrapper[5024]: I1128 17:34:22.249106 5024 generic.go:334] "Generic (PLEG): container finished" podID="08a39720-1020-466e-9226-0257994b642f" containerID="1c67c3361c2743f02bfbd33e4a27ceacbe9ed37fdaa73b88962c11df1f9a64e8" exitCode=0 Nov 28 17:34:22 crc kubenswrapper[5024]: I1128 17:34:22.249243 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" 
event={"ID":"08a39720-1020-466e-9226-0257994b642f","Type":"ContainerDied","Data":"1c67c3361c2743f02bfbd33e4a27ceacbe9ed37fdaa73b88962c11df1f9a64e8"} Nov 28 17:34:22 crc kubenswrapper[5024]: I1128 17:34:22.518422 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c51dd3-21b1-4fdf-a076-64dd49fa10f9" path="/var/lib/kubelet/pods/92c51dd3-21b1-4fdf-a076-64dd49fa10f9/volumes" Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.049157 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c5b8-account-create-update-p7vd8"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.074956 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vpc6d"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.100432 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vpc6d"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.111402 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2880-account-create-update-nzw62"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.123253 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1d37-account-create-update-ps7zm"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.135806 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c5b8-account-create-update-p7vd8"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.147560 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6n8zs"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.158276 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1d37-account-create-update-ps7zm"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.168596 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6n8zs"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.178937 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2880-account-create-update-nzw62"] Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.792609 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.910075 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-inventory\") pod \"08a39720-1020-466e-9226-0257994b642f\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.910409 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5hs\" (UniqueName: \"kubernetes.io/projected/08a39720-1020-466e-9226-0257994b642f-kube-api-access-wm5hs\") pod \"08a39720-1020-466e-9226-0257994b642f\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.910474 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-ssh-key\") pod \"08a39720-1020-466e-9226-0257994b642f\" (UID: \"08a39720-1020-466e-9226-0257994b642f\") " Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.916515 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a39720-1020-466e-9226-0257994b642f-kube-api-access-wm5hs" (OuterVolumeSpecName: "kube-api-access-wm5hs") pod "08a39720-1020-466e-9226-0257994b642f" (UID: "08a39720-1020-466e-9226-0257994b642f"). InnerVolumeSpecName "kube-api-access-wm5hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.978933 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "08a39720-1020-466e-9226-0257994b642f" (UID: "08a39720-1020-466e-9226-0257994b642f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:34:23 crc kubenswrapper[5024]: I1128 17:34:23.982834 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-inventory" (OuterVolumeSpecName: "inventory") pod "08a39720-1020-466e-9226-0257994b642f" (UID: "08a39720-1020-466e-9226-0257994b642f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.013715 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.013747 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5hs\" (UniqueName: \"kubernetes.io/projected/08a39720-1020-466e-9226-0257994b642f-kube-api-access-wm5hs\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.013758 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08a39720-1020-466e-9226-0257994b642f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.273960 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" event={"ID":"08a39720-1020-466e-9226-0257994b642f","Type":"ContainerDied","Data":"be4c986ee373bae2fa40d059cf6c655ce284a1c44c96d857d2f14901723cd925"} Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.274324 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be4c986ee373bae2fa40d059cf6c655ce284a1c44c96d857d2f14901723cd925" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.274049 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.361474 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b"] Nov 28 17:34:24 crc kubenswrapper[5024]: E1128 17:34:24.362435 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a39720-1020-466e-9226-0257994b642f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.362531 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a39720-1020-466e-9226-0257994b642f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.362830 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a39720-1020-466e-9226-0257994b642f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.363775 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.366128 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.366375 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.366760 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.374584 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b"] Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.416113 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.512680 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3775a71c-b9bd-4550-b613-113d5eb727d2" path="/var/lib/kubelet/pods/3775a71c-b9bd-4550-b613-113d5eb727d2/volumes" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.513857 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="769e3a29-37e1-4aa5-ae9a-c82e3efe8892" path="/var/lib/kubelet/pods/769e3a29-37e1-4aa5-ae9a-c82e3efe8892/volumes" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.514858 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f17bfb5-bf03-441a-ac54-d1e842049a41" path="/var/lib/kubelet/pods/8f17bfb5-bf03-441a-ac54-d1e842049a41/volumes" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.516000 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c497f27e-01b1-457c-bcf1-dc7652e9f771" path="/var/lib/kubelet/pods/c497f27e-01b1-457c-bcf1-dc7652e9f771/volumes" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.517737 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4" path="/var/lib/kubelet/pods/c6e7cc81-5c55-4f9b-adc4-1e6a7b8885c4/volumes" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.541412 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.541513 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.541665 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f89fm\" (UniqueName: \"kubernetes.io/projected/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-kube-api-access-f89fm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: 
\"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.644077 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f89fm\" (UniqueName: \"kubernetes.io/projected/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-kube-api-access-f89fm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.644514 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.644599 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.649991 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.650590 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.663712 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f89fm\" (UniqueName: \"kubernetes.io/projected/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-kube-api-access-f89fm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:24 crc kubenswrapper[5024]: I1128 17:34:24.751310 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:34:25 crc kubenswrapper[5024]: I1128 17:34:25.397702 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b"] Nov 28 17:34:26 crc kubenswrapper[5024]: I1128 17:34:26.296985 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" event={"ID":"f92e6a57-6a9f-4020-86d0-298a7bf3ad71","Type":"ContainerStarted","Data":"07e56191e18f021767c952843dace833cea48dca723c289bc145e6406b544026"} Nov 28 17:34:26 crc kubenswrapper[5024]: I1128 17:34:26.297789 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" event={"ID":"f92e6a57-6a9f-4020-86d0-298a7bf3ad71","Type":"ContainerStarted","Data":"66d034f12eae69aac041c5fa2815126262adf2fba64da3ae156a5ae22905a9cd"} Nov 28 17:34:26 crc kubenswrapper[5024]: I1128 17:34:26.328880 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" podStartSLOduration=1.748361152 podStartE2EDuration="2.328831008s" podCreationTimestamp="2025-11-28 17:34:24 +0000 UTC" firstStartedPulling="2025-11-28 17:34:25.401711061 +0000 UTC m=+2167.450631966" lastFinishedPulling="2025-11-28 17:34:25.982180917 +0000 UTC m=+2168.031101822" observedRunningTime="2025-11-28 17:34:26.320969665 +0000 UTC m=+2168.369890590" watchObservedRunningTime="2025-11-28 17:34:26.328831008 +0000 UTC m=+2168.377751913" Nov 28 17:34:51 crc kubenswrapper[5024]: I1128 17:34:51.052962 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-phwfj"] Nov 28 17:34:51 crc kubenswrapper[5024]: I1128 17:34:51.065544 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-phwfj"] Nov 28 17:34:52 crc kubenswrapper[5024]: I1128 17:34:52.515769 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd5b297-8471-4749-aa89-a9d163073420" path="/var/lib/kubelet/pods/4dd5b297-8471-4749-aa89-a9d163073420/volumes" Nov 28 17:34:53 crc kubenswrapper[5024]: I1128 17:34:53.033882 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-4beb-account-create-update-dzhlc"] Nov 28 17:34:53 crc kubenswrapper[5024]: I1128 17:34:53.048770 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-jdpnl"] Nov 28 17:34:53 crc kubenswrapper[5024]: I1128 17:34:53.061834 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-jdpnl"] Nov 28 17:34:53 crc kubenswrapper[5024]: I1128 17:34:53.073343 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-4beb-account-create-update-dzhlc"] Nov 28 17:34:54 crc kubenswrapper[5024]: I1128 17:34:54.510589 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c024930-e403-4410-9f11-4c7bc67711cd" path="/var/lib/kubelet/pods/0c024930-e403-4410-9f11-4c7bc67711cd/volumes" Nov 28 17:34:54 crc kubenswrapper[5024]: I1128 17:34:54.511573 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9381983e-e11c-477e-85b0-df124ae29b32" path="/var/lib/kubelet/pods/9381983e-e11c-477e-85b0-df124ae29b32/volumes" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.031466 5024 scope.go:117] "RemoveContainer" 
containerID="150b88c3ea39666838ae99091bf8d61811d537eb97993c5c10421521a6d8f2bb" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.077680 5024 scope.go:117] "RemoveContainer" containerID="a2e85227963a57a756ff140263f9735c3ed88676b2ed6b90161b6addbb2f7492" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.128355 5024 scope.go:117] "RemoveContainer" containerID="31e037aa45fa58b41822ed8b464db589b4b19b31283b87a28e049271ba80d3b1" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.244962 5024 scope.go:117] "RemoveContainer" containerID="942f5aa92ef3b38b197eea45f78cb718b68a82262c39402b660a603500e760ca" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.288572 5024 scope.go:117] "RemoveContainer" containerID="3a4bdbb876528c143524ef09be5662a3cc1195aa59762b9ea5e67ff56e93b6df" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.325361 5024 scope.go:117] "RemoveContainer" containerID="a1a93464b8e3bd4e46a87aa13e34a300529171ae545357714f907c837ebd51e4" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.376122 5024 scope.go:117] "RemoveContainer" containerID="a2ff8788c5a5e23f28cf3f3dd480b8ef8711fc1329213fbac3c6f6b6497bfd6b" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.397868 5024 scope.go:117] "RemoveContainer" containerID="fac2d5267c6d307b8b23bddb0cc5c653211107a572b2fd206b640be03d034e9a" Nov 28 17:34:56 crc kubenswrapper[5024]: I1128 17:34:56.420345 5024 scope.go:117] "RemoveContainer" containerID="dd8bfb9f6a1150e9791594cfedc584c31750a90d4ce2f2bfc8ba3b21b1337d63" Nov 28 17:35:00 crc kubenswrapper[5024]: I1128 17:35:00.899028 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-glb8k"] Nov 28 17:35:00 crc kubenswrapper[5024]: I1128 17:35:00.902296 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:00 crc kubenswrapper[5024]: I1128 17:35:00.911795 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-glb8k"] Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.056574 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-utilities\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.056738 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-catalog-content\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.056828 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68cpv\" (UniqueName: \"kubernetes.io/projected/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-kube-api-access-68cpv\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.159261 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-catalog-content\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.159390 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68cpv\" (UniqueName: \"kubernetes.io/projected/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-kube-api-access-68cpv\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.159496 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-utilities\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.159838 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-catalog-content\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.159857 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-utilities\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.181736 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-68cpv\" (UniqueName: \"kubernetes.io/projected/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-kube-api-access-68cpv\") pod \"redhat-operators-glb8k\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.225826 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:01 crc kubenswrapper[5024]: I1128 17:35:01.701169 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-glb8k"] Nov 28 17:35:02 crc kubenswrapper[5024]: I1128 17:35:02.697529 5024 generic.go:334] "Generic (PLEG): container finished" podID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerID="c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c" exitCode=0 Nov 28 17:35:02 crc kubenswrapper[5024]: I1128 17:35:02.697804 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glb8k" event={"ID":"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b","Type":"ContainerDied","Data":"c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c"} Nov 28 17:35:02 crc kubenswrapper[5024]: I1128 17:35:02.697831 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glb8k" event={"ID":"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b","Type":"ContainerStarted","Data":"3b627eefb616b1aa1e90e2da9e326bdf240000fadf5ee31d0aacbfddc03aec5c"} Nov 28 17:35:04 crc kubenswrapper[5024]: I1128 17:35:04.718555 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glb8k" event={"ID":"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b","Type":"ContainerStarted","Data":"171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146"} Nov 28 17:35:07 crc kubenswrapper[5024]: I1128 17:35:07.809770 5024 generic.go:334] "Generic (PLEG): container finished" podID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerID="171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146" exitCode=0 Nov 28 17:35:07 crc kubenswrapper[5024]: I1128 17:35:07.810282 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glb8k" event={"ID":"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b","Type":"ContainerDied","Data":"171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146"} Nov 28 17:35:08 crc kubenswrapper[5024]: I1128 17:35:08.828408 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glb8k" event={"ID":"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b","Type":"ContainerStarted","Data":"19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7"} Nov 28 17:35:08 crc kubenswrapper[5024]: I1128 17:35:08.846729 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-glb8k" podStartSLOduration=3.121344273 podStartE2EDuration="8.846712362s" podCreationTimestamp="2025-11-28 17:35:00 +0000 UTC" firstStartedPulling="2025-11-28 17:35:02.699417954 +0000 UTC m=+2204.748338859" lastFinishedPulling="2025-11-28 17:35:08.424786043 +0000 UTC m=+2210.473706948" observedRunningTime="2025-11-28 17:35:08.845577929 +0000 UTC m=+2210.894498834" watchObservedRunningTime="2025-11-28 17:35:08.846712362 +0000 UTC m=+2210.895633257" Nov 28 17:35:11 crc kubenswrapper[5024]: I1128 17:35:11.226073 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 
17:35:11 crc kubenswrapper[5024]: I1128 17:35:11.226127 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:12 crc kubenswrapper[5024]: I1128 17:35:12.275135 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-glb8k" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="registry-server" probeResult="failure" output=< Nov 28 17:35:12 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 17:35:12 crc kubenswrapper[5024]: > Nov 28 17:35:21 crc kubenswrapper[5024]: I1128 17:35:21.044457 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-b77gw"] Nov 28 17:35:21 crc kubenswrapper[5024]: I1128 17:35:21.054857 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-b77gw"] Nov 28 17:35:21 crc kubenswrapper[5024]: I1128 17:35:21.277861 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:21 crc kubenswrapper[5024]: I1128 17:35:21.330212 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:21 crc kubenswrapper[5024]: I1128 17:35:21.522073 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-glb8k"] Nov 28 17:35:22 crc kubenswrapper[5024]: I1128 17:35:22.032494 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7k6k"] Nov 28 17:35:22 crc kubenswrapper[5024]: I1128 17:35:22.045936 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j7k6k"] Nov 28 17:35:22 crc kubenswrapper[5024]: I1128 17:35:22.517416 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c514f3-354d-4254-aea0-821b23140252" path="/var/lib/kubelet/pods/41c514f3-354d-4254-aea0-821b23140252/volumes" Nov 28 17:35:22 crc kubenswrapper[5024]: I1128 17:35:22.518581 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e179cfb9-4f0a-4b45-9c10-ad14432d7fc4" path="/var/lib/kubelet/pods/e179cfb9-4f0a-4b45-9c10-ad14432d7fc4/volumes" Nov 28 17:35:22 crc kubenswrapper[5024]: I1128 17:35:22.968251 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-glb8k" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="registry-server" containerID="cri-o://19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7" gracePeriod=2 Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.538521 5024 util.go:48] "No ready sandbox for pod can be found. 
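The startup-probe failure above ("timeout: failed to connect service \":50051\" within 1s", apparently from a grpc-health-probe-style check) is the catalog registry-server not yet listening on its gRPC port; nine seconds later the same probe reports started and the readiness probe goes ready. The equivalent check expressed with the standard gRPC health service, as a sketch: the address and 1 s budget come from the log, the client code is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second) // "within 1s"
	defer cancel()

	conn, err := grpc.Dial("127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer conn.Close()

	resp, err := grpc_health_v1.NewHealthClient(conn).
		Check(ctx, &grpc_health_v1.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. context deadline exceeded
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING once the server is up
}
```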
Need to start a new one" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.611737 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68cpv\" (UniqueName: \"kubernetes.io/projected/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-kube-api-access-68cpv\") pod \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.611809 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-utilities\") pod \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.611893 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-catalog-content\") pod \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\" (UID: \"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b\") " Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.618343 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-utilities" (OuterVolumeSpecName: "utilities") pod "cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" (UID: "cf0ec55a-2e82-4b31-bfb4-327a06c0e54b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.642387 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-kube-api-access-68cpv" (OuterVolumeSpecName: "kube-api-access-68cpv") pod "cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" (UID: "cf0ec55a-2e82-4b31-bfb4-327a06c0e54b"). InnerVolumeSpecName "kube-api-access-68cpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.715597 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68cpv\" (UniqueName: \"kubernetes.io/projected/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-kube-api-access-68cpv\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.715632 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.776967 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" (UID: "cf0ec55a-2e82-4b31-bfb4-327a06c0e54b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.817913 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.987420 5024 generic.go:334] "Generic (PLEG): container finished" podID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerID="19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7" exitCode=0 Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.987463 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glb8k" event={"ID":"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b","Type":"ContainerDied","Data":"19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7"} Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.987493 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-glb8k" event={"ID":"cf0ec55a-2e82-4b31-bfb4-327a06c0e54b","Type":"ContainerDied","Data":"3b627eefb616b1aa1e90e2da9e326bdf240000fadf5ee31d0aacbfddc03aec5c"} Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.987508 5024 scope.go:117] "RemoveContainer" containerID="19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7" Nov 28 17:35:23 crc kubenswrapper[5024]: I1128 17:35:23.987649 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-glb8k" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.076058 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-glb8k"] Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.079826 5024 scope.go:117] "RemoveContainer" containerID="171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.115350 5024 scope.go:117] "RemoveContainer" containerID="c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.179834 5024 scope.go:117] "RemoveContainer" containerID="19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7" Nov 28 17:35:24 crc kubenswrapper[5024]: E1128 17:35:24.180798 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7\": container with ID starting with 19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7 not found: ID does not exist" containerID="19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.180849 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7"} err="failed to get container status \"19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7\": rpc error: code = NotFound desc = could not find container \"19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7\": container with ID starting with 19719c4275e82498daaca714b69f916d5d4a6c249e407ac8715be46e14d132b7 not found: ID does not exist" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.180877 5024 scope.go:117] "RemoveContainer" containerID="171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146" Nov 28 17:35:24 
crc kubenswrapper[5024]: E1128 17:35:24.181152 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146\": container with ID starting with 171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146 not found: ID does not exist" containerID="171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.181198 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146"} err="failed to get container status \"171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146\": rpc error: code = NotFound desc = could not find container \"171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146\": container with ID starting with 171b01f5bd24801c84b605990a316f2e67ae5bbe7abef5ad2b6874150bbbf146 not found: ID does not exist" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.181214 5024 scope.go:117] "RemoveContainer" containerID="c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c" Nov 28 17:35:24 crc kubenswrapper[5024]: E1128 17:35:24.182576 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c\": container with ID starting with c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c not found: ID does not exist" containerID="c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.182596 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c"} err="failed to get container status \"c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c\": rpc error: code = NotFound desc = could not find container \"c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c\": container with ID starting with c410ebcc0aca3ce450b821ce283cb6f7f512d45648a6da31dbf31cd78b9a207c not found: ID does not exist" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.189386 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-glb8k"] Nov 28 17:35:24 crc kubenswrapper[5024]: E1128 17:35:24.318706 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf0ec55a_2e82_4b31_bfb4_327a06c0e54b.slice/crio-3b627eefb616b1aa1e90e2da9e326bdf240000fadf5ee31d0aacbfddc03aec5c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf0ec55a_2e82_4b31_bfb4_327a06c0e54b.slice\": RecentStats: unable to find data in memory cache]" Nov 28 17:35:24 crc kubenswrapper[5024]: I1128 17:35:24.512195 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" path="/var/lib/kubelet/pods/cf0ec55a-2e82-4b31-bfb4-327a06c0e54b/volumes" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.526491 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dk5n8"] Nov 28 17:35:33 crc kubenswrapper[5024]: E1128 17:35:33.528680 5024 cpu_manager.go:410] "RemoveStaleState: removing 
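The RemoveContainer / "DeleteContainer returned error" pairs above are benign: by the time the kubelet asks CRI-O for the status of each container of the deleted catalog pod, the container is already gone, and the runtime answers with gRPC NotFound. Cleanup code of this kind treats NotFound as success, since the desired state ("container gone") already holds. A sketch of the idiom, not the kubelet's actual function:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a NotFound answer from the runtime as success:
// the container is already gone, e.g. removed by a concurrent GC pass.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	alreadyGone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer(alreadyGone, "19719c4275e8")) // <nil>
}
```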
container" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="registry-server" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.528797 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="registry-server" Nov 28 17:35:33 crc kubenswrapper[5024]: E1128 17:35:33.528921 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="extract-content" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.529001 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="extract-content" Nov 28 17:35:33 crc kubenswrapper[5024]: E1128 17:35:33.529176 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="extract-utilities" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.529262 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="extract-utilities" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.549192 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0ec55a-2e82-4b31-bfb4-327a06c0e54b" containerName="registry-server" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.571957 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk5n8"] Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.572107 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.694700 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-utilities\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.694755 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4z4\" (UniqueName: \"kubernetes.io/projected/a540a1fb-d34b-4c55-8262-e355bfc402b7-kube-api-access-vj4z4\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.694898 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-catalog-content\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.797007 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-utilities\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.797077 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4z4\" (UniqueName: \"kubernetes.io/projected/a540a1fb-d34b-4c55-8262-e355bfc402b7-kube-api-access-vj4z4\") 
pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.797241 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-catalog-content\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.797855 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-catalog-content\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.798067 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-utilities\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.814622 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4z4\" (UniqueName: \"kubernetes.io/projected/a540a1fb-d34b-4c55-8262-e355bfc402b7-kube-api-access-vj4z4\") pod \"community-operators-dk5n8\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:33 crc kubenswrapper[5024]: I1128 17:35:33.901914 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:34 crc kubenswrapper[5024]: W1128 17:35:34.469798 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda540a1fb_d34b_4c55_8262_e355bfc402b7.slice/crio-a18c2254b00cafcc7d9c5c2ca5ad7e28a2a6e74a8a96cfc47add745b6c7cfa25 WatchSource:0}: Error finding container a18c2254b00cafcc7d9c5c2ca5ad7e28a2a6e74a8a96cfc47add745b6c7cfa25: Status 404 returned error can't find the container with id a18c2254b00cafcc7d9c5c2ca5ad7e28a2a6e74a8a96cfc47add745b6c7cfa25 Nov 28 17:35:34 crc kubenswrapper[5024]: I1128 17:35:34.471141 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk5n8"] Nov 28 17:35:35 crc kubenswrapper[5024]: I1128 17:35:35.129377 5024 generic.go:334] "Generic (PLEG): container finished" podID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerID="ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112" exitCode=0 Nov 28 17:35:35 crc kubenswrapper[5024]: I1128 17:35:35.129652 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk5n8" event={"ID":"a540a1fb-d34b-4c55-8262-e355bfc402b7","Type":"ContainerDied","Data":"ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112"} Nov 28 17:35:35 crc kubenswrapper[5024]: I1128 17:35:35.129684 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk5n8" event={"ID":"a540a1fb-d34b-4c55-8262-e355bfc402b7","Type":"ContainerStarted","Data":"a18c2254b00cafcc7d9c5c2ca5ad7e28a2a6e74a8a96cfc47add745b6c7cfa25"} Nov 28 17:35:37 crc kubenswrapper[5024]: I1128 17:35:37.564771 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:35:37 crc kubenswrapper[5024]: I1128 17:35:37.565363 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:35:39 crc kubenswrapper[5024]: I1128 17:35:39.196952 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk5n8" event={"ID":"a540a1fb-d34b-4c55-8262-e355bfc402b7","Type":"ContainerStarted","Data":"7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc"} Nov 28 17:35:40 crc kubenswrapper[5024]: I1128 17:35:40.213390 5024 generic.go:334] "Generic (PLEG): container finished" podID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerID="7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc" exitCode=0 Nov 28 17:35:40 crc kubenswrapper[5024]: I1128 17:35:40.213460 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk5n8" event={"ID":"a540a1fb-d34b-4c55-8262-e355bfc402b7","Type":"ContainerDied","Data":"7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc"} Nov 28 17:35:41 crc kubenswrapper[5024]: I1128 17:35:41.227483 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk5n8" 
event={"ID":"a540a1fb-d34b-4c55-8262-e355bfc402b7","Type":"ContainerStarted","Data":"8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9"} Nov 28 17:35:41 crc kubenswrapper[5024]: I1128 17:35:41.250755 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dk5n8" podStartSLOduration=2.71539169 podStartE2EDuration="8.250734686s" podCreationTimestamp="2025-11-28 17:35:33 +0000 UTC" firstStartedPulling="2025-11-28 17:35:35.132525398 +0000 UTC m=+2237.181446313" lastFinishedPulling="2025-11-28 17:35:40.667868404 +0000 UTC m=+2242.716789309" observedRunningTime="2025-11-28 17:35:41.25019443 +0000 UTC m=+2243.299115345" watchObservedRunningTime="2025-11-28 17:35:41.250734686 +0000 UTC m=+2243.299655601" Nov 28 17:35:42 crc kubenswrapper[5024]: I1128 17:35:42.241411 5024 generic.go:334] "Generic (PLEG): container finished" podID="f92e6a57-6a9f-4020-86d0-298a7bf3ad71" containerID="07e56191e18f021767c952843dace833cea48dca723c289bc145e6406b544026" exitCode=0 Nov 28 17:35:42 crc kubenswrapper[5024]: I1128 17:35:42.243168 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" event={"ID":"f92e6a57-6a9f-4020-86d0-298a7bf3ad71","Type":"ContainerDied","Data":"07e56191e18f021767c952843dace833cea48dca723c289bc145e6406b544026"} Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.799864 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.902451 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.902498 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.941176 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-ssh-key\") pod \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.941421 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-inventory\") pod \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.941669 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f89fm\" (UniqueName: \"kubernetes.io/projected/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-kube-api-access-f89fm\") pod \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\" (UID: \"f92e6a57-6a9f-4020-86d0-298a7bf3ad71\") " Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.946778 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-kube-api-access-f89fm" (OuterVolumeSpecName: "kube-api-access-f89fm") pod "f92e6a57-6a9f-4020-86d0-298a7bf3ad71" (UID: "f92e6a57-6a9f-4020-86d0-298a7bf3ad71"). InnerVolumeSpecName "kube-api-access-f89fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.971911 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:43 crc kubenswrapper[5024]: I1128 17:35:43.987640 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f92e6a57-6a9f-4020-86d0-298a7bf3ad71" (UID: "f92e6a57-6a9f-4020-86d0-298a7bf3ad71"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.008517 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-inventory" (OuterVolumeSpecName: "inventory") pod "f92e6a57-6a9f-4020-86d0-298a7bf3ad71" (UID: "f92e6a57-6a9f-4020-86d0-298a7bf3ad71"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.045394 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.045433 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f89fm\" (UniqueName: \"kubernetes.io/projected/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-kube-api-access-f89fm\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.045446 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92e6a57-6a9f-4020-86d0-298a7bf3ad71-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.262796 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" event={"ID":"f92e6a57-6a9f-4020-86d0-298a7bf3ad71","Type":"ContainerDied","Data":"66d034f12eae69aac041c5fa2815126262adf2fba64da3ae156a5ae22905a9cd"} Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.262870 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d034f12eae69aac041c5fa2815126262adf2fba64da3ae156a5ae22905a9cd" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.262826 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.363191 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"] Nov 28 17:35:44 crc kubenswrapper[5024]: E1128 17:35:44.364049 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92e6a57-6a9f-4020-86d0-298a7bf3ad71" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.364078 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92e6a57-6a9f-4020-86d0-298a7bf3ad71" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.364383 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92e6a57-6a9f-4020-86d0-298a7bf3ad71" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.365607 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.368579 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.368791 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.370187 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.371976 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.396881 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"] Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.560572 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.561064 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.561100 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vppq9\" (UniqueName: \"kubernetes.io/projected/a43b660d-89bb-407a-8661-654ddda19d22-kube-api-access-vppq9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" Nov 28 17:35:44 crc 
Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.663483 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"
Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.663513 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vppq9\" (UniqueName: \"kubernetes.io/projected/a43b660d-89bb-407a-8661-654ddda19d22-kube-api-access-vppq9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"
Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.671762 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"
Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.671792 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"
Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.682379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vppq9\" (UniqueName: \"kubernetes.io/projected/a43b660d-89bb-407a-8661-654ddda19d22-kube-api-access-vppq9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4d86n\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"
Nov 28 17:35:44 crc kubenswrapper[5024]: I1128 17:35:44.703642 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" Nov 28 17:35:45 crc kubenswrapper[5024]: I1128 17:35:45.282469 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n"] Nov 28 17:35:45 crc kubenswrapper[5024]: W1128 17:35:45.283691 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda43b660d_89bb_407a_8661_654ddda19d22.slice/crio-b31b12e99e8a351e086b929c00ce2642ef2b79804192d8c18c19851f6955c869 WatchSource:0}: Error finding container b31b12e99e8a351e086b929c00ce2642ef2b79804192d8c18c19851f6955c869: Status 404 returned error can't find the container with id b31b12e99e8a351e086b929c00ce2642ef2b79804192d8c18c19851f6955c869 Nov 28 17:35:46 crc kubenswrapper[5024]: I1128 17:35:46.283489 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" event={"ID":"a43b660d-89bb-407a-8661-654ddda19d22","Type":"ContainerStarted","Data":"b31b12e99e8a351e086b929c00ce2642ef2b79804192d8c18c19851f6955c869"} Nov 28 17:35:47 crc kubenswrapper[5024]: I1128 17:35:47.298333 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" event={"ID":"a43b660d-89bb-407a-8661-654ddda19d22","Type":"ContainerStarted","Data":"8d732d6d73076ed2b837407d8afec5a2d0c69c8f8babdcf3e20979f5da4feb59"} Nov 28 17:35:47 crc kubenswrapper[5024]: I1128 17:35:47.324012 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" podStartSLOduration=2.255588133 podStartE2EDuration="3.323985943s" podCreationTimestamp="2025-11-28 17:35:44 +0000 UTC" firstStartedPulling="2025-11-28 17:35:45.287465144 +0000 UTC m=+2247.336386049" lastFinishedPulling="2025-11-28 17:35:46.355862964 +0000 UTC m=+2248.404783859" observedRunningTime="2025-11-28 17:35:47.319516404 +0000 UTC m=+2249.368437309" watchObservedRunningTime="2025-11-28 17:35:47.323985943 +0000 UTC m=+2249.372906838" Nov 28 17:35:52 crc kubenswrapper[5024]: I1128 17:35:52.354938 5024 generic.go:334] "Generic (PLEG): container finished" podID="a43b660d-89bb-407a-8661-654ddda19d22" containerID="8d732d6d73076ed2b837407d8afec5a2d0c69c8f8babdcf3e20979f5da4feb59" exitCode=0 Nov 28 17:35:52 crc kubenswrapper[5024]: I1128 17:35:52.355139 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" event={"ID":"a43b660d-89bb-407a-8661-654ddda19d22","Type":"ContainerDied","Data":"8d732d6d73076ed2b837407d8afec5a2d0c69c8f8babdcf3e20979f5da4feb59"} Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.834606 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.890196 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vppq9\" (UniqueName: \"kubernetes.io/projected/a43b660d-89bb-407a-8661-654ddda19d22-kube-api-access-vppq9\") pod \"a43b660d-89bb-407a-8661-654ddda19d22\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.890429 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-ssh-key\") pod \"a43b660d-89bb-407a-8661-654ddda19d22\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.890534 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-inventory\") pod \"a43b660d-89bb-407a-8661-654ddda19d22\" (UID: \"a43b660d-89bb-407a-8661-654ddda19d22\") " Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.896341 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43b660d-89bb-407a-8661-654ddda19d22-kube-api-access-vppq9" (OuterVolumeSpecName: "kube-api-access-vppq9") pod "a43b660d-89bb-407a-8661-654ddda19d22" (UID: "a43b660d-89bb-407a-8661-654ddda19d22"). InnerVolumeSpecName "kube-api-access-vppq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.924963 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-inventory" (OuterVolumeSpecName: "inventory") pod "a43b660d-89bb-407a-8661-654ddda19d22" (UID: "a43b660d-89bb-407a-8661-654ddda19d22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.928373 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a43b660d-89bb-407a-8661-654ddda19d22" (UID: "a43b660d-89bb-407a-8661-654ddda19d22"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.969165 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.993435 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vppq9\" (UniqueName: \"kubernetes.io/projected/a43b660d-89bb-407a-8661-654ddda19d22-kube-api-access-vppq9\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.993476 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:53 crc kubenswrapper[5024]: I1128 17:35:53.993488 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a43b660d-89bb-407a-8661-654ddda19d22-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.056036 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk5n8"] Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.143606 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pr8z6"] Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.143843 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pr8z6" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="registry-server" containerID="cri-o://b6f8dc0ecd2375f371405c9bc6f235ebd81eaf7d0be626449f225b73abd1d30a" gracePeriod=2 Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.376605 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" event={"ID":"a43b660d-89bb-407a-8661-654ddda19d22","Type":"ContainerDied","Data":"b31b12e99e8a351e086b929c00ce2642ef2b79804192d8c18c19851f6955c869"} Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.376649 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b31b12e99e8a351e086b929c00ce2642ef2b79804192d8c18c19851f6955c869" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.376710 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4d86n" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.385047 5024 generic.go:334] "Generic (PLEG): container finished" podID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerID="b6f8dc0ecd2375f371405c9bc6f235ebd81eaf7d0be626449f225b73abd1d30a" exitCode=0 Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.386256 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr8z6" event={"ID":"9c12a1e9-4dd9-4470-8343-ca7cedab2c34","Type":"ContainerDied","Data":"b6f8dc0ecd2375f371405c9bc6f235ebd81eaf7d0be626449f225b73abd1d30a"} Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.475440 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4"] Nov 28 17:35:54 crc kubenswrapper[5024]: E1128 17:35:54.476183 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43b660d-89bb-407a-8661-654ddda19d22" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.476208 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43b660d-89bb-407a-8661-654ddda19d22" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.476453 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43b660d-89bb-407a-8661-654ddda19d22" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.477554 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.479916 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.480115 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.480319 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.480434 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.524350 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.524738 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.524840 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cdhx4\" (UniqueName: \"kubernetes.io/projected/08090ae1-dcb4-4744-8650-c56fcdb30575-kube-api-access-cdhx4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.524404 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4"] Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.627626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.627785 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.627836 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhx4\" (UniqueName: \"kubernetes.io/projected/08090ae1-dcb4-4744-8650-c56fcdb30575-kube-api-access-cdhx4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.645914 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.649360 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhx4\" (UniqueName: \"kubernetes.io/projected/08090ae1-dcb4-4744-8650-c56fcdb30575-kube-api-access-cdhx4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.650546 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2mxf4\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.669338 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.729239 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-catalog-content\") pod \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.729603 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxn4l\" (UniqueName: \"kubernetes.io/projected/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-kube-api-access-xxn4l\") pod \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.729942 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-utilities\") pod \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\" (UID: \"9c12a1e9-4dd9-4470-8343-ca7cedab2c34\") " Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.731554 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-utilities" (OuterVolumeSpecName: "utilities") pod "9c12a1e9-4dd9-4470-8343-ca7cedab2c34" (UID: "9c12a1e9-4dd9-4470-8343-ca7cedab2c34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.749402 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-kube-api-access-xxn4l" (OuterVolumeSpecName: "kube-api-access-xxn4l") pod "9c12a1e9-4dd9-4470-8343-ca7cedab2c34" (UID: "9c12a1e9-4dd9-4470-8343-ca7cedab2c34"). InnerVolumeSpecName "kube-api-access-xxn4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.801709 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.803997 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c12a1e9-4dd9-4470-8343-ca7cedab2c34" (UID: "9c12a1e9-4dd9-4470-8343-ca7cedab2c34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.833639 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.833673 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:54 crc kubenswrapper[5024]: I1128 17:35:54.833686 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxn4l\" (UniqueName: \"kubernetes.io/projected/9c12a1e9-4dd9-4470-8343-ca7cedab2c34-kube-api-access-xxn4l\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.398189 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pr8z6" Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.398147 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr8z6" event={"ID":"9c12a1e9-4dd9-4470-8343-ca7cedab2c34","Type":"ContainerDied","Data":"3434b4feee76e0e4451e9a70b0657d8c59b875114c2ed1d4a3d8b202da9a4917"} Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.398595 5024 scope.go:117] "RemoveContainer" containerID="b6f8dc0ecd2375f371405c9bc6f235ebd81eaf7d0be626449f225b73abd1d30a" Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.432104 5024 scope.go:117] "RemoveContainer" containerID="b7dccb87d369b223a27ca44796ae2223983a07dcfceb0980890259e7accbc225" Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.446440 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pr8z6"] Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.491001 5024 scope.go:117] "RemoveContainer" containerID="6c52d99ed03db9ce779628f5a4c8811f13848fda154e2387649888ae0b5b1861" Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.495449 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pr8z6"] Nov 28 17:35:55 crc kubenswrapper[5024]: I1128 17:35:55.516178 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4"] Nov 28 17:35:56 crc kubenswrapper[5024]: I1128 17:35:56.408212 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" event={"ID":"08090ae1-dcb4-4744-8650-c56fcdb30575","Type":"ContainerStarted","Data":"c7287e666563f0b8c2936e260af5d2386b79b0b8faa26a3ca9755a35510a9da3"} Nov 28 17:35:56 crc kubenswrapper[5024]: I1128 17:35:56.408495 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" event={"ID":"08090ae1-dcb4-4744-8650-c56fcdb30575","Type":"ContainerStarted","Data":"af26f3e63c6f62ec7cd289dae57928c7f693207d09ae06cee0a8876b9653ac38"} Nov 28 17:35:56 crc kubenswrapper[5024]: I1128 17:35:56.431105 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" podStartSLOduration=1.970263272 podStartE2EDuration="2.431085385s" podCreationTimestamp="2025-11-28 17:35:54 +0000 UTC" firstStartedPulling="2025-11-28 17:35:55.500896051 +0000 UTC m=+2257.549816956" 
lastFinishedPulling="2025-11-28 17:35:55.961718164 +0000 UTC m=+2258.010639069" observedRunningTime="2025-11-28 17:35:56.425777591 +0000 UTC m=+2258.474698496" watchObservedRunningTime="2025-11-28 17:35:56.431085385 +0000 UTC m=+2258.480006290" Nov 28 17:35:56 crc kubenswrapper[5024]: I1128 17:35:56.513853 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" path="/var/lib/kubelet/pods/9c12a1e9-4dd9-4470-8343-ca7cedab2c34/volumes" Nov 28 17:35:56 crc kubenswrapper[5024]: I1128 17:35:56.637478 5024 scope.go:117] "RemoveContainer" containerID="e69794901f3ecc2a3703449de30328242032fbe02e17b29237a656216d3fd946" Nov 28 17:35:56 crc kubenswrapper[5024]: I1128 17:35:56.697835 5024 scope.go:117] "RemoveContainer" containerID="705d6097270b28f54efe431dc19d441a16e92988d96d63d9b1a3847adc062c0a" Nov 28 17:36:07 crc kubenswrapper[5024]: I1128 17:36:07.565289 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:36:07 crc kubenswrapper[5024]: I1128 17:36:07.565948 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:36:08 crc kubenswrapper[5024]: I1128 17:36:08.041996 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-7b526"] Nov 28 17:36:08 crc kubenswrapper[5024]: I1128 17:36:08.054037 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-7b526"] Nov 28 17:36:08 crc kubenswrapper[5024]: I1128 17:36:08.520975 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d35aa9-bbf5-406f-98c2-7e884f136b29" path="/var/lib/kubelet/pods/08d35aa9-bbf5-406f-98c2-7e884f136b29/volumes" Nov 28 17:36:32 crc kubenswrapper[5024]: I1128 17:36:32.811940 5024 generic.go:334] "Generic (PLEG): container finished" podID="08090ae1-dcb4-4744-8650-c56fcdb30575" containerID="c7287e666563f0b8c2936e260af5d2386b79b0b8faa26a3ca9755a35510a9da3" exitCode=0 Nov 28 17:36:32 crc kubenswrapper[5024]: I1128 17:36:32.812057 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" event={"ID":"08090ae1-dcb4-4744-8650-c56fcdb30575","Type":"ContainerDied","Data":"c7287e666563f0b8c2936e260af5d2386b79b0b8faa26a3ca9755a35510a9da3"} Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.292317 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.448909 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-ssh-key\") pod \"08090ae1-dcb4-4744-8650-c56fcdb30575\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.449063 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdhx4\" (UniqueName: \"kubernetes.io/projected/08090ae1-dcb4-4744-8650-c56fcdb30575-kube-api-access-cdhx4\") pod \"08090ae1-dcb4-4744-8650-c56fcdb30575\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.449185 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-inventory\") pod \"08090ae1-dcb4-4744-8650-c56fcdb30575\" (UID: \"08090ae1-dcb4-4744-8650-c56fcdb30575\") " Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.454091 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08090ae1-dcb4-4744-8650-c56fcdb30575-kube-api-access-cdhx4" (OuterVolumeSpecName: "kube-api-access-cdhx4") pod "08090ae1-dcb4-4744-8650-c56fcdb30575" (UID: "08090ae1-dcb4-4744-8650-c56fcdb30575"). InnerVolumeSpecName "kube-api-access-cdhx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.482216 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-inventory" (OuterVolumeSpecName: "inventory") pod "08090ae1-dcb4-4744-8650-c56fcdb30575" (UID: "08090ae1-dcb4-4744-8650-c56fcdb30575"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.482830 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "08090ae1-dcb4-4744-8650-c56fcdb30575" (UID: "08090ae1-dcb4-4744-8650-c56fcdb30575"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.552978 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.553293 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdhx4\" (UniqueName: \"kubernetes.io/projected/08090ae1-dcb4-4744-8650-c56fcdb30575-kube-api-access-cdhx4\") on node \"crc\" DevicePath \"\"" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.553304 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08090ae1-dcb4-4744-8650-c56fcdb30575-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.837729 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" event={"ID":"08090ae1-dcb4-4744-8650-c56fcdb30575","Type":"ContainerDied","Data":"af26f3e63c6f62ec7cd289dae57928c7f693207d09ae06cee0a8876b9653ac38"} Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.837771 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af26f3e63c6f62ec7cd289dae57928c7f693207d09ae06cee0a8876b9653ac38" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.838256 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2mxf4" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.937433 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7"] Nov 28 17:36:34 crc kubenswrapper[5024]: E1128 17:36:34.938330 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="extract-utilities" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.938356 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="extract-utilities" Nov 28 17:36:34 crc kubenswrapper[5024]: E1128 17:36:34.938395 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08090ae1-dcb4-4744-8650-c56fcdb30575" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.938407 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="08090ae1-dcb4-4744-8650-c56fcdb30575" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:36:34 crc kubenswrapper[5024]: E1128 17:36:34.938466 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="registry-server" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.938474 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="registry-server" Nov 28 17:36:34 crc kubenswrapper[5024]: E1128 17:36:34.938493 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="extract-content" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.938502 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="extract-content" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.938998 5024 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9c12a1e9-4dd9-4470-8343-ca7cedab2c34" containerName="registry-server" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.939038 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="08090ae1-dcb4-4744-8650-c56fcdb30575" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.940402 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.942995 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.944292 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.944454 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.947346 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:36:34 crc kubenswrapper[5024]: I1128 17:36:34.956817 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7"] Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.064436 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.064540 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhl7\" (UniqueName: \"kubernetes.io/projected/acf50993-28ae-470e-a987-d19f7f609d59-kube-api-access-6vhl7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.064726 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.167382 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.167532 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-ssh-key\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.167635 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vhl7\" (UniqueName: \"kubernetes.io/projected/acf50993-28ae-470e-a987-d19f7f609d59-kube-api-access-6vhl7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.171810 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.172218 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.198599 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vhl7\" (UniqueName: \"kubernetes.io/projected/acf50993-28ae-470e-a987-d19f7f609d59-kube-api-access-6vhl7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.258070 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.807858 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7"] Nov 28 17:36:35 crc kubenswrapper[5024]: I1128 17:36:35.852296 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" event={"ID":"acf50993-28ae-470e-a987-d19f7f609d59","Type":"ContainerStarted","Data":"ebb023b9a76447b37c2c9d5d3bb1f65503cb8c78a8e680879c26f6cffb5053f8"} Nov 28 17:36:36 crc kubenswrapper[5024]: I1128 17:36:36.865052 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" event={"ID":"acf50993-28ae-470e-a987-d19f7f609d59","Type":"ContainerStarted","Data":"61c06fde8a69e3fdfa87df4d9f3fd2069498b4c0b54dda25a191b9e2bedc0b62"} Nov 28 17:36:36 crc kubenswrapper[5024]: I1128 17:36:36.892048 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" podStartSLOduration=2.41472572 podStartE2EDuration="2.891977458s" podCreationTimestamp="2025-11-28 17:36:34 +0000 UTC" firstStartedPulling="2025-11-28 17:36:35.806364141 +0000 UTC m=+2297.855285036" lastFinishedPulling="2025-11-28 17:36:36.283615869 +0000 UTC m=+2298.332536774" observedRunningTime="2025-11-28 17:36:36.887980923 +0000 UTC m=+2298.936901828" watchObservedRunningTime="2025-11-28 17:36:36.891977458 +0000 UTC m=+2298.940898363" Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.565322 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.565659 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.565711 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.566742 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.566814 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" gracePeriod=600 Nov 28 17:36:37 crc kubenswrapper[5024]: E1128 17:36:37.687902 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.877469 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" exitCode=0 Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.877555 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240"} Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.877607 5024 scope.go:117] "RemoveContainer" containerID="b6b772564713d0d8deeb50543a5adf26c834290c0e443e8b7a14e2ddc0070fe5" Nov 28 17:36:37 crc kubenswrapper[5024]: I1128 17:36:37.878324 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:36:37 crc kubenswrapper[5024]: E1128 17:36:37.878688 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:36:52 crc kubenswrapper[5024]: I1128 17:36:52.497995 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:36:52 crc kubenswrapper[5024]: E1128 17:36:52.498939 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.551166 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t5bsm"] Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.554701 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.565516 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5bsm"] Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.585695 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmkz\" (UniqueName: \"kubernetes.io/projected/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-kube-api-access-hfmkz\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.587199 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-utilities\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.587440 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-catalog-content\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.689962 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-catalog-content\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.690117 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfmkz\" (UniqueName: \"kubernetes.io/projected/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-kube-api-access-hfmkz\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.690280 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-utilities\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.690542 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-catalog-content\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.690870 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-utilities\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.719547 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hfmkz\" (UniqueName: \"kubernetes.io/projected/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-kube-api-access-hfmkz\") pod \"redhat-marketplace-t5bsm\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:54 crc kubenswrapper[5024]: I1128 17:36:54.883503 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:36:55 crc kubenswrapper[5024]: W1128 17:36:55.392862 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf67b3086_7f22_4f4a_aea7_4ad73f1c546a.slice/crio-1b34fcd22d6e321340439e82003c509149ba666ade70f69540f14870a0c0a2cd WatchSource:0}: Error finding container 1b34fcd22d6e321340439e82003c509149ba666ade70f69540f14870a0c0a2cd: Status 404 returned error can't find the container with id 1b34fcd22d6e321340439e82003c509149ba666ade70f69540f14870a0c0a2cd Nov 28 17:36:55 crc kubenswrapper[5024]: I1128 17:36:55.400315 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5bsm"] Nov 28 17:36:56 crc kubenswrapper[5024]: I1128 17:36:56.087495 5024 generic.go:334] "Generic (PLEG): container finished" podID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerID="014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d" exitCode=0 Nov 28 17:36:56 crc kubenswrapper[5024]: I1128 17:36:56.087593 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5bsm" event={"ID":"f67b3086-7f22-4f4a-aea7-4ad73f1c546a","Type":"ContainerDied","Data":"014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d"} Nov 28 17:36:56 crc kubenswrapper[5024]: I1128 17:36:56.087758 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5bsm" event={"ID":"f67b3086-7f22-4f4a-aea7-4ad73f1c546a","Type":"ContainerStarted","Data":"1b34fcd22d6e321340439e82003c509149ba666ade70f69540f14870a0c0a2cd"} Nov 28 17:36:56 crc kubenswrapper[5024]: I1128 17:36:56.877846 5024 scope.go:117] "RemoveContainer" containerID="2c7a9cda6a007685e5ac80e5e8141da7c784a50d954ba08517a6a5e5c90f7ec4" Nov 28 17:36:58 crc kubenswrapper[5024]: I1128 17:36:58.111010 5024 generic.go:334] "Generic (PLEG): container finished" podID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerID="9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258" exitCode=0 Nov 28 17:36:58 crc kubenswrapper[5024]: I1128 17:36:58.111183 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5bsm" event={"ID":"f67b3086-7f22-4f4a-aea7-4ad73f1c546a","Type":"ContainerDied","Data":"9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258"} Nov 28 17:36:59 crc kubenswrapper[5024]: I1128 17:36:59.124709 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5bsm" event={"ID":"f67b3086-7f22-4f4a-aea7-4ad73f1c546a","Type":"ContainerStarted","Data":"e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06"} Nov 28 17:36:59 crc kubenswrapper[5024]: I1128 17:36:59.208693 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t5bsm" podStartSLOduration=2.7342636909999998 podStartE2EDuration="5.20867122s" podCreationTimestamp="2025-11-28 17:36:54 +0000 UTC" firstStartedPulling="2025-11-28 17:36:56.089647764 +0000 UTC 
m=+2318.138568679" lastFinishedPulling="2025-11-28 17:36:58.564055303 +0000 UTC m=+2320.612976208" observedRunningTime="2025-11-28 17:36:59.151291251 +0000 UTC m=+2321.200212156" watchObservedRunningTime="2025-11-28 17:36:59.20867122 +0000 UTC m=+2321.257592125" Nov 28 17:37:04 crc kubenswrapper[5024]: I1128 17:37:04.884329 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:37:04 crc kubenswrapper[5024]: I1128 17:37:04.885261 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:37:04 crc kubenswrapper[5024]: I1128 17:37:04.941682 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:37:05 crc kubenswrapper[5024]: I1128 17:37:05.257664 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:37:05 crc kubenswrapper[5024]: I1128 17:37:05.304490 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5bsm"] Nov 28 17:37:06 crc kubenswrapper[5024]: I1128 17:37:06.498489 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:37:06 crc kubenswrapper[5024]: E1128 17:37:06.499456 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.220236 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t5bsm" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="registry-server" containerID="cri-o://e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06" gracePeriod=2 Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.720641 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.850411 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-utilities\") pod \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.850544 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfmkz\" (UniqueName: \"kubernetes.io/projected/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-kube-api-access-hfmkz\") pod \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.850603 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-catalog-content\") pod \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\" (UID: \"f67b3086-7f22-4f4a-aea7-4ad73f1c546a\") " Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.851529 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-utilities" (OuterVolumeSpecName: "utilities") pod "f67b3086-7f22-4f4a-aea7-4ad73f1c546a" (UID: "f67b3086-7f22-4f4a-aea7-4ad73f1c546a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.855970 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-kube-api-access-hfmkz" (OuterVolumeSpecName: "kube-api-access-hfmkz") pod "f67b3086-7f22-4f4a-aea7-4ad73f1c546a" (UID: "f67b3086-7f22-4f4a-aea7-4ad73f1c546a"). InnerVolumeSpecName "kube-api-access-hfmkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.868983 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f67b3086-7f22-4f4a-aea7-4ad73f1c546a" (UID: "f67b3086-7f22-4f4a-aea7-4ad73f1c546a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.954142 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.954181 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfmkz\" (UniqueName: \"kubernetes.io/projected/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-kube-api-access-hfmkz\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:07 crc kubenswrapper[5024]: I1128 17:37:07.954196 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f67b3086-7f22-4f4a-aea7-4ad73f1c546a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.233254 5024 generic.go:334] "Generic (PLEG): container finished" podID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerID="e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06" exitCode=0 Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.233312 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5bsm" event={"ID":"f67b3086-7f22-4f4a-aea7-4ad73f1c546a","Type":"ContainerDied","Data":"e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06"} Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.233337 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5bsm" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.233355 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5bsm" event={"ID":"f67b3086-7f22-4f4a-aea7-4ad73f1c546a","Type":"ContainerDied","Data":"1b34fcd22d6e321340439e82003c509149ba666ade70f69540f14870a0c0a2cd"} Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.233377 5024 scope.go:117] "RemoveContainer" containerID="e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.264631 5024 scope.go:117] "RemoveContainer" containerID="9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.295764 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5bsm"] Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.296182 5024 scope.go:117] "RemoveContainer" containerID="014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.380246 5024 scope.go:117] "RemoveContainer" containerID="e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06" Nov 28 17:37:08 crc kubenswrapper[5024]: E1128 17:37:08.385278 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06\": container with ID starting with e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06 not found: ID does not exist" containerID="e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.385329 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06"} 
err="failed to get container status \"e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06\": rpc error: code = NotFound desc = could not find container \"e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06\": container with ID starting with e873a0adb1b0047d754465771c98cae863712a103c3c5469b3c4c16c3ce21e06 not found: ID does not exist" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.385354 5024 scope.go:117] "RemoveContainer" containerID="9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.385876 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5bsm"] Nov 28 17:37:08 crc kubenswrapper[5024]: E1128 17:37:08.389135 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258\": container with ID starting with 9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258 not found: ID does not exist" containerID="9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.389195 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258"} err="failed to get container status \"9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258\": rpc error: code = NotFound desc = could not find container \"9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258\": container with ID starting with 9bfb0fb5c562dd2c0c5923b68b7eea297c17753208ac7d4d02ce4d6c6a8ea258 not found: ID does not exist" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.389248 5024 scope.go:117] "RemoveContainer" containerID="014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d" Nov 28 17:37:08 crc kubenswrapper[5024]: E1128 17:37:08.389762 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d\": container with ID starting with 014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d not found: ID does not exist" containerID="014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.389818 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d"} err="failed to get container status \"014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d\": rpc error: code = NotFound desc = could not find container \"014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d\": container with ID starting with 014ea40d6ef5a6bcf284c9e8ad39223f536af4c65487f4a08ea49911f49f4e6d not found: ID does not exist" Nov 28 17:37:08 crc kubenswrapper[5024]: I1128 17:37:08.515424 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" path="/var/lib/kubelet/pods/f67b3086-7f22-4f4a-aea7-4ad73f1c546a/volumes" Nov 28 17:37:21 crc kubenswrapper[5024]: I1128 17:37:21.499008 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:37:21 crc kubenswrapper[5024]: E1128 17:37:21.499688 5024 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:37:24 crc kubenswrapper[5024]: I1128 17:37:24.441358 5024 generic.go:334] "Generic (PLEG): container finished" podID="acf50993-28ae-470e-a987-d19f7f609d59" containerID="61c06fde8a69e3fdfa87df4d9f3fd2069498b4c0b54dda25a191b9e2bedc0b62" exitCode=0 Nov 28 17:37:24 crc kubenswrapper[5024]: I1128 17:37:24.441444 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" event={"ID":"acf50993-28ae-470e-a987-d19f7f609d59","Type":"ContainerDied","Data":"61c06fde8a69e3fdfa87df4d9f3fd2069498b4c0b54dda25a191b9e2bedc0b62"} Nov 28 17:37:25 crc kubenswrapper[5024]: I1128 17:37:25.915936 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.013967 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vhl7\" (UniqueName: \"kubernetes.io/projected/acf50993-28ae-470e-a987-d19f7f609d59-kube-api-access-6vhl7\") pod \"acf50993-28ae-470e-a987-d19f7f609d59\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.014438 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-ssh-key\") pod \"acf50993-28ae-470e-a987-d19f7f609d59\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.014635 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-inventory\") pod \"acf50993-28ae-470e-a987-d19f7f609d59\" (UID: \"acf50993-28ae-470e-a987-d19f7f609d59\") " Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.021276 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf50993-28ae-470e-a987-d19f7f609d59-kube-api-access-6vhl7" (OuterVolumeSpecName: "kube-api-access-6vhl7") pod "acf50993-28ae-470e-a987-d19f7f609d59" (UID: "acf50993-28ae-470e-a987-d19f7f609d59"). InnerVolumeSpecName "kube-api-access-6vhl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.046320 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-inventory" (OuterVolumeSpecName: "inventory") pod "acf50993-28ae-470e-a987-d19f7f609d59" (UID: "acf50993-28ae-470e-a987-d19f7f609d59"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.051627 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "acf50993-28ae-470e-a987-d19f7f609d59" (UID: "acf50993-28ae-470e-a987-d19f7f609d59"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.118457 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vhl7\" (UniqueName: \"kubernetes.io/projected/acf50993-28ae-470e-a987-d19f7f609d59-kube-api-access-6vhl7\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.118524 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.118537 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acf50993-28ae-470e-a987-d19f7f609d59-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.463211 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" event={"ID":"acf50993-28ae-470e-a987-d19f7f609d59","Type":"ContainerDied","Data":"ebb023b9a76447b37c2c9d5d3bb1f65503cb8c78a8e680879c26f6cffb5053f8"} Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.463256 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.463256 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebb023b9a76447b37c2c9d5d3bb1f65503cb8c78a8e680879c26f6cffb5053f8" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.640770 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rs8rp"] Nov 28 17:37:26 crc kubenswrapper[5024]: E1128 17:37:26.641393 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf50993-28ae-470e-a987-d19f7f609d59" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.641422 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf50993-28ae-470e-a987-d19f7f609d59" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:37:26 crc kubenswrapper[5024]: E1128 17:37:26.641478 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="extract-utilities" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.641502 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="extract-utilities" Nov 28 17:37:26 crc kubenswrapper[5024]: E1128 17:37:26.641522 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="registry-server" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.641531 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="registry-server" Nov 28 17:37:26 crc kubenswrapper[5024]: E1128 17:37:26.641562 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="extract-content" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.641574 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="extract-content" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.642031 5024 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f67b3086-7f22-4f4a-aea7-4ad73f1c546a" containerName="registry-server" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.642078 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf50993-28ae-470e-a987-d19f7f609d59" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.643073 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.645885 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.646211 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.646896 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.647518 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.662720 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rs8rp"] Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.837324 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.837636 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbm8c\" (UniqueName: \"kubernetes.io/projected/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-kube-api-access-xbm8c\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.837813 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.940548 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbm8c\" (UniqueName: \"kubernetes.io/projected/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-kube-api-access-xbm8c\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.940626 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.940768 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.945504 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.946491 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.963317 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbm8c\" (UniqueName: \"kubernetes.io/projected/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-kube-api-access-xbm8c\") pod \"ssh-known-hosts-edpm-deployment-rs8rp\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:26 crc kubenswrapper[5024]: I1128 17:37:26.991308 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:27 crc kubenswrapper[5024]: I1128 17:37:27.605848 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rs8rp"] Nov 28 17:37:27 crc kubenswrapper[5024]: I1128 17:37:27.607853 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:37:28 crc kubenswrapper[5024]: I1128 17:37:28.517510 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" event={"ID":"41635a67-7e43-4d50-a1d7-57c8d6fe55a7","Type":"ContainerStarted","Data":"29ba085b25c7c74acf83f3b0408a98356891012c0eff46b963097485240d8efe"} Nov 28 17:37:29 crc kubenswrapper[5024]: I1128 17:37:29.527891 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" event={"ID":"41635a67-7e43-4d50-a1d7-57c8d6fe55a7","Type":"ContainerStarted","Data":"ff03cccbc7c9367ac7c44828f167f54d02a7f1389dc8e7d2dc108fb31668a1e0"} Nov 28 17:37:29 crc kubenswrapper[5024]: I1128 17:37:29.555521 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" podStartSLOduration=2.953777529 podStartE2EDuration="3.55549841s" podCreationTimestamp="2025-11-28 17:37:26 +0000 UTC" firstStartedPulling="2025-11-28 17:37:27.607665306 +0000 UTC m=+2349.656586211" lastFinishedPulling="2025-11-28 17:37:28.209386187 +0000 UTC m=+2350.258307092" observedRunningTime="2025-11-28 17:37:29.54520972 +0000 UTC m=+2351.594130645" watchObservedRunningTime="2025-11-28 17:37:29.55549841 +0000 UTC m=+2351.604419315" Nov 28 17:37:35 crc kubenswrapper[5024]: I1128 17:37:35.600830 5024 generic.go:334] "Generic (PLEG): container finished" podID="41635a67-7e43-4d50-a1d7-57c8d6fe55a7" containerID="ff03cccbc7c9367ac7c44828f167f54d02a7f1389dc8e7d2dc108fb31668a1e0" exitCode=0 Nov 28 17:37:35 crc kubenswrapper[5024]: I1128 17:37:35.600937 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" event={"ID":"41635a67-7e43-4d50-a1d7-57c8d6fe55a7","Type":"ContainerDied","Data":"ff03cccbc7c9367ac7c44828f167f54d02a7f1389dc8e7d2dc108fb31668a1e0"} Nov 28 17:37:36 crc kubenswrapper[5024]: I1128 17:37:36.498490 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:37:36 crc kubenswrapper[5024]: E1128 17:37:36.498861 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.085335 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.189404 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-inventory-0\") pod \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.189479 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbm8c\" (UniqueName: \"kubernetes.io/projected/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-kube-api-access-xbm8c\") pod \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.189663 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-ssh-key-openstack-edpm-ipam\") pod \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\" (UID: \"41635a67-7e43-4d50-a1d7-57c8d6fe55a7\") " Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.197459 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-kube-api-access-xbm8c" (OuterVolumeSpecName: "kube-api-access-xbm8c") pod "41635a67-7e43-4d50-a1d7-57c8d6fe55a7" (UID: "41635a67-7e43-4d50-a1d7-57c8d6fe55a7"). InnerVolumeSpecName "kube-api-access-xbm8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.233634 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41635a67-7e43-4d50-a1d7-57c8d6fe55a7" (UID: "41635a67-7e43-4d50-a1d7-57c8d6fe55a7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.234118 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "41635a67-7e43-4d50-a1d7-57c8d6fe55a7" (UID: "41635a67-7e43-4d50-a1d7-57c8d6fe55a7"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.292662 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.292696 5024 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.292711 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbm8c\" (UniqueName: \"kubernetes.io/projected/41635a67-7e43-4d50-a1d7-57c8d6fe55a7-kube-api-access-xbm8c\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.622476 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" event={"ID":"41635a67-7e43-4d50-a1d7-57c8d6fe55a7","Type":"ContainerDied","Data":"29ba085b25c7c74acf83f3b0408a98356891012c0eff46b963097485240d8efe"} Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.622522 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29ba085b25c7c74acf83f3b0408a98356891012c0eff46b963097485240d8efe" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.622550 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rs8rp" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.789358 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx"] Nov 28 17:37:37 crc kubenswrapper[5024]: E1128 17:37:37.790156 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41635a67-7e43-4d50-a1d7-57c8d6fe55a7" containerName="ssh-known-hosts-edpm-deployment" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.790230 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="41635a67-7e43-4d50-a1d7-57c8d6fe55a7" containerName="ssh-known-hosts-edpm-deployment" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.792176 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="41635a67-7e43-4d50-a1d7-57c8d6fe55a7" containerName="ssh-known-hosts-edpm-deployment" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.793329 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.798449 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.799273 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.799504 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.799665 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.813012 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx"] Nov 28 17:37:37 crc kubenswrapper[5024]: E1128 17:37:37.910678 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41635a67_7e43_4d50_a1d7_57c8d6fe55a7.slice/crio-29ba085b25c7c74acf83f3b0408a98356891012c0eff46b963097485240d8efe\": RecentStats: unable to find data in memory cache]" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.935944 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67dcl\" (UniqueName: \"kubernetes.io/projected/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-kube-api-access-67dcl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.936059 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:37 crc kubenswrapper[5024]: I1128 17:37:37.936241 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.039156 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67dcl\" (UniqueName: \"kubernetes.io/projected/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-kube-api-access-67dcl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.039285 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.039520 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.045481 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.046499 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.068943 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67dcl\" (UniqueName: \"kubernetes.io/projected/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-kube-api-access-67dcl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2krjx\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.111586 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:38 crc kubenswrapper[5024]: I1128 17:37:38.682985 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx"] Nov 28 17:37:39 crc kubenswrapper[5024]: I1128 17:37:39.662792 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" event={"ID":"13a4c9e2-93df-4ec3-801a-4674e2ac1f50","Type":"ContainerStarted","Data":"0f493b4dea131e77a0718739ca101163fb92c092c509488b2eeaf2bb8b0a5291"} Nov 28 17:37:39 crc kubenswrapper[5024]: I1128 17:37:39.664565 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" event={"ID":"13a4c9e2-93df-4ec3-801a-4674e2ac1f50","Type":"ContainerStarted","Data":"8c4dcfdadf973fab0e29329e03b2b6deb88b3811043991b79d7386448d67c9b7"} Nov 28 17:37:47 crc kubenswrapper[5024]: I1128 17:37:47.751870 5024 generic.go:334] "Generic (PLEG): container finished" podID="13a4c9e2-93df-4ec3-801a-4674e2ac1f50" containerID="0f493b4dea131e77a0718739ca101163fb92c092c509488b2eeaf2bb8b0a5291" exitCode=0 Nov 28 17:37:47 crc kubenswrapper[5024]: I1128 17:37:47.751973 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" event={"ID":"13a4c9e2-93df-4ec3-801a-4674e2ac1f50","Type":"ContainerDied","Data":"0f493b4dea131e77a0718739ca101163fb92c092c509488b2eeaf2bb8b0a5291"} Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.202122 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.366170 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-ssh-key\") pod \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.366620 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67dcl\" (UniqueName: \"kubernetes.io/projected/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-kube-api-access-67dcl\") pod \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.366789 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-inventory\") pod \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\" (UID: \"13a4c9e2-93df-4ec3-801a-4674e2ac1f50\") " Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.373372 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-kube-api-access-67dcl" (OuterVolumeSpecName: "kube-api-access-67dcl") pod "13a4c9e2-93df-4ec3-801a-4674e2ac1f50" (UID: "13a4c9e2-93df-4ec3-801a-4674e2ac1f50"). InnerVolumeSpecName "kube-api-access-67dcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.401347 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-inventory" (OuterVolumeSpecName: "inventory") pod "13a4c9e2-93df-4ec3-801a-4674e2ac1f50" (UID: "13a4c9e2-93df-4ec3-801a-4674e2ac1f50"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.401637 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "13a4c9e2-93df-4ec3-801a-4674e2ac1f50" (UID: "13a4c9e2-93df-4ec3-801a-4674e2ac1f50"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.469742 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.469777 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.469792 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67dcl\" (UniqueName: \"kubernetes.io/projected/13a4c9e2-93df-4ec3-801a-4674e2ac1f50-kube-api-access-67dcl\") on node \"crc\" DevicePath \"\"" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.499283 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:37:49 crc kubenswrapper[5024]: E1128 17:37:49.499590 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.775603 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" event={"ID":"13a4c9e2-93df-4ec3-801a-4674e2ac1f50","Type":"ContainerDied","Data":"8c4dcfdadf973fab0e29329e03b2b6deb88b3811043991b79d7386448d67c9b7"} Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.775645 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4dcfdadf973fab0e29329e03b2b6deb88b3811043991b79d7386448d67c9b7" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.776112 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2krjx" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.845628 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns"] Nov 28 17:37:49 crc kubenswrapper[5024]: E1128 17:37:49.846282 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a4c9e2-93df-4ec3-801a-4674e2ac1f50" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.846305 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a4c9e2-93df-4ec3-801a-4674e2ac1f50" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.846567 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a4c9e2-93df-4ec3-801a-4674e2ac1f50" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.847529 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.849842 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.851095 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.851285 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.857882 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.862112 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns"] Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.880041 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.880140 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.880251 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxlf\" (UniqueName: \"kubernetes.io/projected/dce32347-3163-4eaa-8bc8-43e812be9ead-kube-api-access-lzxlf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.982928 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxlf\" (UniqueName: \"kubernetes.io/projected/dce32347-3163-4eaa-8bc8-43e812be9ead-kube-api-access-lzxlf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.983108 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.983168 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: 
\"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.987520 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.997077 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:49 crc kubenswrapper[5024]: I1128 17:37:49.999160 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxlf\" (UniqueName: \"kubernetes.io/projected/dce32347-3163-4eaa-8bc8-43e812be9ead-kube-api-access-lzxlf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:50 crc kubenswrapper[5024]: I1128 17:37:50.178640 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:37:50 crc kubenswrapper[5024]: I1128 17:37:50.748119 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns"] Nov 28 17:37:50 crc kubenswrapper[5024]: I1128 17:37:50.786731 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" event={"ID":"dce32347-3163-4eaa-8bc8-43e812be9ead","Type":"ContainerStarted","Data":"6bb51c04f839576d1251411596631ad1fdc65551ad46cd166809dd9ea8335de1"} Nov 28 17:37:51 crc kubenswrapper[5024]: I1128 17:37:51.799176 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" event={"ID":"dce32347-3163-4eaa-8bc8-43e812be9ead","Type":"ContainerStarted","Data":"4d32c981df63d173ced43f901e155b6e82de847e874e449b81315f5391853a7c"} Nov 28 17:37:51 crc kubenswrapper[5024]: I1128 17:37:51.818482 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" podStartSLOduration=2.365020078 podStartE2EDuration="2.818462604s" podCreationTimestamp="2025-11-28 17:37:49 +0000 UTC" firstStartedPulling="2025-11-28 17:37:50.754826649 +0000 UTC m=+2372.803747554" lastFinishedPulling="2025-11-28 17:37:51.208269175 +0000 UTC m=+2373.257190080" observedRunningTime="2025-11-28 17:37:51.815979671 +0000 UTC m=+2373.864900576" watchObservedRunningTime="2025-11-28 17:37:51.818462604 +0000 UTC m=+2373.867383509" Nov 28 17:38:00 crc kubenswrapper[5024]: I1128 17:38:00.517436 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:38:00 crc kubenswrapper[5024]: E1128 17:38:00.521436 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:38:01 crc kubenswrapper[5024]: I1128 17:38:01.910687 5024 generic.go:334] "Generic (PLEG): container finished" podID="dce32347-3163-4eaa-8bc8-43e812be9ead" containerID="4d32c981df63d173ced43f901e155b6e82de847e874e449b81315f5391853a7c" exitCode=0 Nov 28 17:38:01 crc kubenswrapper[5024]: I1128 17:38:01.910799 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" event={"ID":"dce32347-3163-4eaa-8bc8-43e812be9ead","Type":"ContainerDied","Data":"4d32c981df63d173ced43f901e155b6e82de847e874e449b81315f5391853a7c"} Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.466561 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.646425 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-ssh-key\") pod \"dce32347-3163-4eaa-8bc8-43e812be9ead\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.646859 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzxlf\" (UniqueName: \"kubernetes.io/projected/dce32347-3163-4eaa-8bc8-43e812be9ead-kube-api-access-lzxlf\") pod \"dce32347-3163-4eaa-8bc8-43e812be9ead\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.647405 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-inventory\") pod \"dce32347-3163-4eaa-8bc8-43e812be9ead\" (UID: \"dce32347-3163-4eaa-8bc8-43e812be9ead\") " Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.668342 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce32347-3163-4eaa-8bc8-43e812be9ead-kube-api-access-lzxlf" (OuterVolumeSpecName: "kube-api-access-lzxlf") pod "dce32347-3163-4eaa-8bc8-43e812be9ead" (UID: "dce32347-3163-4eaa-8bc8-43e812be9ead"). InnerVolumeSpecName "kube-api-access-lzxlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.692353 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-inventory" (OuterVolumeSpecName: "inventory") pod "dce32347-3163-4eaa-8bc8-43e812be9ead" (UID: "dce32347-3163-4eaa-8bc8-43e812be9ead"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.692733 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dce32347-3163-4eaa-8bc8-43e812be9ead" (UID: "dce32347-3163-4eaa-8bc8-43e812be9ead"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.751130 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.751180 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzxlf\" (UniqueName: \"kubernetes.io/projected/dce32347-3163-4eaa-8bc8-43e812be9ead-kube-api-access-lzxlf\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.751235 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dce32347-3163-4eaa-8bc8-43e812be9ead-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.939400 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" event={"ID":"dce32347-3163-4eaa-8bc8-43e812be9ead","Type":"ContainerDied","Data":"6bb51c04f839576d1251411596631ad1fdc65551ad46cd166809dd9ea8335de1"} Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.939469 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb51c04f839576d1251411596631ad1fdc65551ad46cd166809dd9ea8335de1" Nov 28 17:38:03 crc kubenswrapper[5024]: I1128 17:38:03.939575 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.047848 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2"] Nov 28 17:38:04 crc kubenswrapper[5024]: E1128 17:38:04.048554 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce32347-3163-4eaa-8bc8-43e812be9ead" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.048583 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce32347-3163-4eaa-8bc8-43e812be9ead" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.048908 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce32347-3163-4eaa-8bc8-43e812be9ead" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.049957 5024 util.go:30] "No sandbox for pod can be found. 
Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.049957 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.056355 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.056523 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.056570 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.056830 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.057063 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.057273 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.057439 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.057585 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.057750 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.084277 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2"] Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167585 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167652 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167680 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") "
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167710 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167732 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167755 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167772 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167797 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167823 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167843 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 
28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167898 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167923 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.167953 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhcl\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-kube-api-access-pmhcl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.168086 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.168106 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.168165 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.269653 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhcl\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-kube-api-access-pmhcl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.269801 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.269826 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.269888 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.269927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.269960 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.269987 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270011 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270060 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: 
\"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270091 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270132 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270357 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270400 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270429 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270512 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.270553 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.274708 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.274826 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.276254 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.276962 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.277105 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.277781 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.277927 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.277950 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" 
(UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.278718 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.279453 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.280256 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.280289 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.280988 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.282294 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.284293 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.286498 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhcl\" 
(UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-kube-api-access-pmhcl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.388227 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:04 crc kubenswrapper[5024]: I1128 17:38:04.971282 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2"] Nov 28 17:38:05 crc kubenswrapper[5024]: I1128 17:38:05.962683 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" event={"ID":"9bb41f70-f26c-4ca8-8953-0dad03b77a6a","Type":"ContainerStarted","Data":"78e47b83a4239e52b51d96ac166b573b5b19c4558f496b1e7d74fc97ab8cd1e8"} Nov 28 17:38:05 crc kubenswrapper[5024]: I1128 17:38:05.963223 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" event={"ID":"9bb41f70-f26c-4ca8-8953-0dad03b77a6a","Type":"ContainerStarted","Data":"b41dddf88f84904765aac22ff9714ed2a88ff8c7a41e28c8cf825e8b83327c33"} Nov 28 17:38:05 crc kubenswrapper[5024]: I1128 17:38:05.985769 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" podStartSLOduration=1.467501134 podStartE2EDuration="1.985732854s" podCreationTimestamp="2025-11-28 17:38:04 +0000 UTC" firstStartedPulling="2025-11-28 17:38:04.97273899 +0000 UTC m=+2387.021659895" lastFinishedPulling="2025-11-28 17:38:05.4909707 +0000 UTC m=+2387.539891615" observedRunningTime="2025-11-28 17:38:05.984701994 +0000 UTC m=+2388.033622889" watchObservedRunningTime="2025-11-28 17:38:05.985732854 +0000 UTC m=+2388.034653779" Nov 28 17:38:10 crc kubenswrapper[5024]: I1128 17:38:10.047739 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-hbbrk"] Nov 28 17:38:10 crc kubenswrapper[5024]: I1128 17:38:10.061886 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-hbbrk"] Nov 28 17:38:10 crc kubenswrapper[5024]: I1128 17:38:10.513125 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a7fb5de-075a-4c27-a648-e6762bd7c941" path="/var/lib/kubelet/pods/3a7fb5de-075a-4c27-a648-e6762bd7c941/volumes" Nov 28 17:38:11 crc kubenswrapper[5024]: I1128 17:38:11.498573 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:38:11 crc kubenswrapper[5024]: E1128 17:38:11.499327 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:38:23 crc kubenswrapper[5024]: I1128 17:38:23.498750 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:38:23 crc kubenswrapper[5024]: E1128 17:38:23.499499 5024 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:38:35 crc kubenswrapper[5024]: I1128 17:38:35.498471 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:38:35 crc kubenswrapper[5024]: E1128 17:38:35.499455 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:38:48 crc kubenswrapper[5024]: I1128 17:38:48.048753 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-wmm7h"] Nov 28 17:38:48 crc kubenswrapper[5024]: I1128 17:38:48.059911 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-wmm7h"] Nov 28 17:38:48 crc kubenswrapper[5024]: I1128 17:38:48.509384 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:38:48 crc kubenswrapper[5024]: E1128 17:38:48.511645 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:38:48 crc kubenswrapper[5024]: I1128 17:38:48.527206 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b59921a-033f-454b-afad-20ee4a3481e4" path="/var/lib/kubelet/pods/8b59921a-033f-454b-afad-20ee4a3481e4/volumes" Nov 28 17:38:50 crc kubenswrapper[5024]: I1128 17:38:50.432220 5024 generic.go:334] "Generic (PLEG): container finished" podID="9bb41f70-f26c-4ca8-8953-0dad03b77a6a" containerID="78e47b83a4239e52b51d96ac166b573b5b19c4558f496b1e7d74fc97ab8cd1e8" exitCode=0 Nov 28 17:38:50 crc kubenswrapper[5024]: I1128 17:38:50.432320 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" event={"ID":"9bb41f70-f26c-4ca8-8953-0dad03b77a6a","Type":"ContainerDied","Data":"78e47b83a4239e52b51d96ac166b573b5b19c4558f496b1e7d74fc97ab8cd1e8"} Nov 28 17:38:51 crc kubenswrapper[5024]: I1128 17:38:51.935447 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073256 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073344 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ovn-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073442 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-repo-setup-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073513 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073573 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073611 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073632 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-bootstrap-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073680 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073717 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073766 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-neutron-metadata-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073800 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmhcl\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-kube-api-access-pmhcl\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073844 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073864 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ssh-key\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073894 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-libvirt-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.073975 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-nova-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.074050 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-power-monitoring-combined-ca-bundle\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.080265 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.080442 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.081928 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.081920 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.082068 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.082985 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.083096 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.083990 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.084958 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.085167 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.086443 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.087841 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-kube-api-access-pmhcl" (OuterVolumeSpecName: "kube-api-access-pmhcl") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "kube-api-access-pmhcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.088252 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.093132 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: E1128 17:38:52.111697 5024 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory podName:9bb41f70-f26c-4ca8-8953-0dad03b77a6a nodeName:}" failed. No retries permitted until 2025-11-28 17:38:52.611076321 +0000 UTC m=+2434.659997226 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a") : error deleting /var/lib/kubelet/pods/9bb41f70-f26c-4ca8-8953-0dad03b77a6a/volume-subpaths: remove /var/lib/kubelet/pods/9bb41f70-f26c-4ca8-8953-0dad03b77a6a/volume-subpaths: no such file or directory Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.113574 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176596 5024 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176638 5024 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176697 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176709 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176719 5024 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176731 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176741 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176753 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\""
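
The single error in this teardown is transient and benign: nestedpendingoperations blocks retries of the "inventory" unmount for 500ms because deleting the pod's volume-subpaths directory failed with "no such file or directory", i.e. the directory was already gone. The retry visible below (UnmountVolume for "inventory" restarted at 17:38:52.700949, TearDown succeeded at 17:38:52.714866) clears it on the next pass. The usual way to avoid the extra round trip is to make cleanup idempotent and treat a missing path as success; a minimal sketch of that pattern (cleanupSubpaths is a hypothetical helper, not the kubelet's actual subpath code):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    // cleanupSubpaths removes a pod's volume-subpaths directory, treating a
    // missing directory as already clean, so a retry loop never sees the
    // ENOENT that triggered the 500ms backoff above.
    func cleanupSubpaths(podUID string) error {
        dir := "/var/lib/kubelet/pods/" + podUID + "/volume-subpaths"
        if err := os.Remove(dir); err != nil && !errors.Is(err, fs.ErrNotExist) {
            return fmt.Errorf("error cleaning subPath mounts: %w", err)
        }
        return nil
    }

    func main() {
        // With the directory absent, as in the log, this prints <nil>.
        fmt.Println(cleanupSubpaths("9bb41f70-f26c-4ca8-8953-0dad03b77a6a"))
    }
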
(UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176772 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176781 5024 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176790 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmhcl\" (UniqueName: \"kubernetes.io/projected/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-kube-api-access-pmhcl\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176799 5024 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176807 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.176880 5024 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.456619 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" event={"ID":"9bb41f70-f26c-4ca8-8953-0dad03b77a6a","Type":"ContainerDied","Data":"b41dddf88f84904765aac22ff9714ed2a88ff8c7a41e28c8cf825e8b83327c33"} Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.456885 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b41dddf88f84904765aac22ff9714ed2a88ff8c7a41e28c8cf825e8b83327c33" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.456688 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.561413 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv"] Nov 28 17:38:52 crc kubenswrapper[5024]: E1128 17:38:52.562415 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb41f70-f26c-4ca8-8953-0dad03b77a6a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.562437 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb41f70-f26c-4ca8-8953-0dad03b77a6a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.562768 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb41f70-f26c-4ca8-8953-0dad03b77a6a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.563888 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.566073 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.588843 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv"] Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.700949 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory\") pod \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\" (UID: \"9bb41f70-f26c-4ca8-8953-0dad03b77a6a\") " Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.701624 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.701696 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn6wf\" (UniqueName: \"kubernetes.io/projected/804c2c31-2211-4c96-8f9f-a9c96543d8c7-kube-api-access-qn6wf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.701821 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.701865 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" 
(UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.702242 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.714866 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory" (OuterVolumeSpecName: "inventory") pod "9bb41f70-f26c-4ca8-8953-0dad03b77a6a" (UID: "9bb41f70-f26c-4ca8-8953-0dad03b77a6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.807644 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.807755 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn6wf\" (UniqueName: \"kubernetes.io/projected/804c2c31-2211-4c96-8f9f-a9c96543d8c7-kube-api-access-qn6wf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.807856 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.807898 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.808126 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.808235 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bb41f70-f26c-4ca8-8953-0dad03b77a6a-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.809107 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.812847 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.816011 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.817701 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.846853 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn6wf\" (UniqueName: \"kubernetes.io/projected/804c2c31-2211-4c96-8f9f-a9c96543d8c7-kube-api-access-qn6wf\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhknv\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:52 crc kubenswrapper[5024]: I1128 17:38:52.901664 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:38:53 crc kubenswrapper[5024]: I1128 17:38:53.496570 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv"] Nov 28 17:38:54 crc kubenswrapper[5024]: I1128 17:38:54.482515 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" event={"ID":"804c2c31-2211-4c96-8f9f-a9c96543d8c7","Type":"ContainerStarted","Data":"682a0c4925f8dd688a9a8fb343e826e3a4f75195309061787c543fb4083284da"} Nov 28 17:38:54 crc kubenswrapper[5024]: I1128 17:38:54.483095 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" event={"ID":"804c2c31-2211-4c96-8f9f-a9c96543d8c7","Type":"ContainerStarted","Data":"f025804584f6a065cd7fa659c8260ecbeb97854418ee16bf4eb07cd22c7d964d"} Nov 28 17:38:54 crc kubenswrapper[5024]: I1128 17:38:54.498833 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" podStartSLOduration=1.985714245 podStartE2EDuration="2.498813455s" podCreationTimestamp="2025-11-28 17:38:52 +0000 UTC" firstStartedPulling="2025-11-28 17:38:53.499989775 +0000 UTC m=+2435.548910680" lastFinishedPulling="2025-11-28 17:38:54.013088975 +0000 UTC m=+2436.062009890" observedRunningTime="2025-11-28 17:38:54.496867298 +0000 UTC m=+2436.545788203" watchObservedRunningTime="2025-11-28 17:38:54.498813455 +0000 UTC m=+2436.547734360" Nov 28 17:38:56 crc kubenswrapper[5024]: I1128 17:38:56.990514 5024 scope.go:117] "RemoveContainer" containerID="6ac7b192a69b799d0138dac2087dab1ca84f0c15b5330ee1531c1a191a6d23e2" Nov 28 17:38:57 crc kubenswrapper[5024]: I1128 17:38:57.029761 5024 scope.go:117] "RemoveContainer" containerID="19084581dd9169bc80f9009d33cbd82ae5796b397dfbc75cc95259c9f80a5a6c" Nov 28 17:39:01 crc kubenswrapper[5024]: I1128 17:39:01.498601 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:39:01 crc kubenswrapper[5024]: E1128 17:39:01.499455 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:39:13 crc kubenswrapper[5024]: I1128 17:39:13.498404 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:39:13 crc kubenswrapper[5024]: E1128 17:39:13.499348 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:39:27 crc kubenswrapper[5024]: I1128 17:39:27.498761 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:39:27 crc kubenswrapper[5024]: E1128 17:39:27.499533 5024 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:39:40 crc kubenswrapper[5024]: I1128 17:39:40.497777 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:39:40 crc kubenswrapper[5024]: E1128 17:39:40.498565 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:39:55 crc kubenswrapper[5024]: I1128 17:39:55.498562 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:39:55 crc kubenswrapper[5024]: E1128 17:39:55.499328 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:39:58 crc kubenswrapper[5024]: I1128 17:39:58.157655 5024 generic.go:334] "Generic (PLEG): container finished" podID="804c2c31-2211-4c96-8f9f-a9c96543d8c7" containerID="682a0c4925f8dd688a9a8fb343e826e3a4f75195309061787c543fb4083284da" exitCode=0 Nov 28 17:39:58 crc kubenswrapper[5024]: I1128 17:39:58.157754 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" event={"ID":"804c2c31-2211-4c96-8f9f-a9c96543d8c7","Type":"ContainerDied","Data":"682a0c4925f8dd688a9a8fb343e826e3a4f75195309061787c543fb4083284da"} Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.694707 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.844680 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovn-combined-ca-bundle\") pod \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.844733 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovncontroller-config-0\") pod \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.844819 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn6wf\" (UniqueName: \"kubernetes.io/projected/804c2c31-2211-4c96-8f9f-a9c96543d8c7-kube-api-access-qn6wf\") pod \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.844908 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-inventory\") pod \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.845046 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ssh-key\") pod \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\" (UID: \"804c2c31-2211-4c96-8f9f-a9c96543d8c7\") " Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.849867 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "804c2c31-2211-4c96-8f9f-a9c96543d8c7" (UID: "804c2c31-2211-4c96-8f9f-a9c96543d8c7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.850498 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804c2c31-2211-4c96-8f9f-a9c96543d8c7-kube-api-access-qn6wf" (OuterVolumeSpecName: "kube-api-access-qn6wf") pod "804c2c31-2211-4c96-8f9f-a9c96543d8c7" (UID: "804c2c31-2211-4c96-8f9f-a9c96543d8c7"). InnerVolumeSpecName "kube-api-access-qn6wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.874198 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "804c2c31-2211-4c96-8f9f-a9c96543d8c7" (UID: "804c2c31-2211-4c96-8f9f-a9c96543d8c7"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.879985 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-inventory" (OuterVolumeSpecName: "inventory") pod "804c2c31-2211-4c96-8f9f-a9c96543d8c7" (UID: "804c2c31-2211-4c96-8f9f-a9c96543d8c7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.891281 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "804c2c31-2211-4c96-8f9f-a9c96543d8c7" (UID: "804c2c31-2211-4c96-8f9f-a9c96543d8c7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.947683 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn6wf\" (UniqueName: \"kubernetes.io/projected/804c2c31-2211-4c96-8f9f-a9c96543d8c7-kube-api-access-qn6wf\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.947718 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.947729 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.947742 5024 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:59 crc kubenswrapper[5024]: I1128 17:39:59.947755 5024 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/804c2c31-2211-4c96-8f9f-a9c96543d8c7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.180265 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" event={"ID":"804c2c31-2211-4c96-8f9f-a9c96543d8c7","Type":"ContainerDied","Data":"f025804584f6a065cd7fa659c8260ecbeb97854418ee16bf4eb07cd22c7d964d"} Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.180305 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f025804584f6a065cd7fa659c8260ecbeb97854418ee16bf4eb07cd22c7d964d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.180361 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhknv" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.280724 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d"] Nov 28 17:40:00 crc kubenswrapper[5024]: E1128 17:40:00.281390 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c2c31-2211-4c96-8f9f-a9c96543d8c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.281411 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c2c31-2211-4c96-8f9f-a9c96543d8c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.281657 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c2c31-2211-4c96-8f9f-a9c96543d8c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.282712 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.285857 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.286166 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.286320 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.286481 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.286511 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.294326 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.304942 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d"] Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.457466 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.457823 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmlpm\" (UniqueName: \"kubernetes.io/projected/a052b839-2b8d-4f97-afc6-29279c78dbdc-kube-api-access-wmlpm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.457890 5024 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.457923 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.457971 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.458034 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.559821 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.559871 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.559930 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.559983 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.560250 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.560313 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmlpm\" (UniqueName: \"kubernetes.io/projected/a052b839-2b8d-4f97-afc6-29279c78dbdc-kube-api-access-wmlpm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.564204 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.564933 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.564956 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.565396 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.566237 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.578266 5024 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-wmlpm\" (UniqueName: \"kubernetes.io/projected/a052b839-2b8d-4f97-afc6-29279c78dbdc-kube-api-access-wmlpm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:00 crc kubenswrapper[5024]: I1128 17:40:00.605987 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:01 crc kubenswrapper[5024]: I1128 17:40:01.979638 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d"] Nov 28 17:40:01 crc kubenswrapper[5024]: W1128 17:40:01.984413 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda052b839_2b8d_4f97_afc6_29279c78dbdc.slice/crio-13f60a574b5420d99e7a0a8fb7551a3896001672f0f994cdeac2341e1df402ea WatchSource:0}: Error finding container 13f60a574b5420d99e7a0a8fb7551a3896001672f0f994cdeac2341e1df402ea: Status 404 returned error can't find the container with id 13f60a574b5420d99e7a0a8fb7551a3896001672f0f994cdeac2341e1df402ea Nov 28 17:40:02 crc kubenswrapper[5024]: I1128 17:40:02.200481 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" event={"ID":"a052b839-2b8d-4f97-afc6-29279c78dbdc","Type":"ContainerStarted","Data":"13f60a574b5420d99e7a0a8fb7551a3896001672f0f994cdeac2341e1df402ea"} Nov 28 17:40:03 crc kubenswrapper[5024]: I1128 17:40:03.221803 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" event={"ID":"a052b839-2b8d-4f97-afc6-29279c78dbdc","Type":"ContainerStarted","Data":"fb0a19aebdaf1f0817363a0c5068b27cde10e7c4a5b956b0659ff3767f073174"} Nov 28 17:40:03 crc kubenswrapper[5024]: I1128 17:40:03.275921 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" podStartSLOduration=2.619851248 podStartE2EDuration="3.275898656s" podCreationTimestamp="2025-11-28 17:40:00 +0000 UTC" firstStartedPulling="2025-11-28 17:40:01.98712878 +0000 UTC m=+2504.036049685" lastFinishedPulling="2025-11-28 17:40:02.643176188 +0000 UTC m=+2504.692097093" observedRunningTime="2025-11-28 17:40:03.261737038 +0000 UTC m=+2505.310657943" watchObservedRunningTime="2025-11-28 17:40:03.275898656 +0000 UTC m=+2505.324819561" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.724586 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w66lp"] Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.728321 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.739874 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w66lp"] Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.794103 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-utilities\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.794292 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6nmd\" (UniqueName: \"kubernetes.io/projected/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-kube-api-access-r6nmd\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.794767 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-catalog-content\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.897592 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-utilities\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.897696 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6nmd\" (UniqueName: \"kubernetes.io/projected/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-kube-api-access-r6nmd\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.897990 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-catalog-content\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.898122 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-utilities\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.898551 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-catalog-content\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:04 crc kubenswrapper[5024]: I1128 17:40:04.940047 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r6nmd\" (UniqueName: \"kubernetes.io/projected/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-kube-api-access-r6nmd\") pod \"certified-operators-w66lp\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:05 crc kubenswrapper[5024]: I1128 17:40:05.057768 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:05 crc kubenswrapper[5024]: I1128 17:40:05.711041 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w66lp"] Nov 28 17:40:06 crc kubenswrapper[5024]: I1128 17:40:06.260988 5024 generic.go:334] "Generic (PLEG): container finished" podID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerID="cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634" exitCode=0 Nov 28 17:40:06 crc kubenswrapper[5024]: I1128 17:40:06.261069 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66lp" event={"ID":"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f","Type":"ContainerDied","Data":"cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634"} Nov 28 17:40:06 crc kubenswrapper[5024]: I1128 17:40:06.261606 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66lp" event={"ID":"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f","Type":"ContainerStarted","Data":"b0da9cd0dadd40a2e9d1840a0559b73f803d33f31001a92595c8873816b2321e"} Nov 28 17:40:09 crc kubenswrapper[5024]: I1128 17:40:09.365729 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66lp" event={"ID":"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f","Type":"ContainerStarted","Data":"0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f"} Nov 28 17:40:10 crc kubenswrapper[5024]: I1128 17:40:10.498357 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:40:10 crc kubenswrapper[5024]: E1128 17:40:10.498742 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:40:11 crc kubenswrapper[5024]: I1128 17:40:11.386825 5024 generic.go:334] "Generic (PLEG): container finished" podID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerID="0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f" exitCode=0 Nov 28 17:40:11 crc kubenswrapper[5024]: I1128 17:40:11.386895 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66lp" event={"ID":"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f","Type":"ContainerDied","Data":"0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f"} Nov 28 17:40:12 crc kubenswrapper[5024]: I1128 17:40:12.399587 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66lp" event={"ID":"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f","Type":"ContainerStarted","Data":"eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7"} Nov 28 17:40:12 crc kubenswrapper[5024]: I1128 17:40:12.437571 5024 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w66lp" podStartSLOduration=2.861496187 podStartE2EDuration="8.437548585s" podCreationTimestamp="2025-11-28 17:40:04 +0000 UTC" firstStartedPulling="2025-11-28 17:40:06.264133327 +0000 UTC m=+2508.313054232" lastFinishedPulling="2025-11-28 17:40:11.840185725 +0000 UTC m=+2513.889106630" observedRunningTime="2025-11-28 17:40:12.426863438 +0000 UTC m=+2514.475784363" watchObservedRunningTime="2025-11-28 17:40:12.437548585 +0000 UTC m=+2514.486469490" Nov 28 17:40:15 crc kubenswrapper[5024]: I1128 17:40:15.058805 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:15 crc kubenswrapper[5024]: I1128 17:40:15.059148 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:15 crc kubenswrapper[5024]: I1128 17:40:15.109972 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:22 crc kubenswrapper[5024]: I1128 17:40:22.497918 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:40:22 crc kubenswrapper[5024]: E1128 17:40:22.498715 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:40:25 crc kubenswrapper[5024]: I1128 17:40:25.107863 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:25 crc kubenswrapper[5024]: I1128 17:40:25.163186 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w66lp"] Nov 28 17:40:25 crc kubenswrapper[5024]: I1128 17:40:25.545466 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w66lp" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="registry-server" containerID="cri-o://eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7" gracePeriod=2 Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.219977 5024 util.go:48] "No ready sandbox for pod can be found. 
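Between 17:40:15 and 17:40:25 the "SyncLoop (probe)" entries show the registry-server's startup probe flip from unhealthy to started and the readiness probe go from empty (not yet run) to ready, ten seconds before the API DELETE arrives. A sketch that lists probe transitions per pod from the same assumed kubelet.log:

import re

# Matches: "SyncLoop (probe)" probe="startup" status="unhealthy" pod="ns/name"
PROBE_RE = re.compile(
    r'"SyncLoop \(probe\)" probe="([^"]+)" status="([^"]*)" pod="([^"]+)"'
)

def probe_transitions(path="kubelet.log"):
    """Yield (pod, probe, status) in log order; an empty status means no result yet."""
    with open(path) as f:
        for line in f:
            m = PROBE_RE.search(line)
            if m:
                probe, status, pod = m.groups()
                yield pod, probe, status or "<unknown>"

if __name__ == "__main__":
    for pod, probe, status in probe_transitions():
        print(f"{pod}: {probe} -> {status}")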
Need to start a new one" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.342503 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-catalog-content\") pod \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.342720 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6nmd\" (UniqueName: \"kubernetes.io/projected/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-kube-api-access-r6nmd\") pod \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.342796 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-utilities\") pod \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\" (UID: \"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f\") " Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.343519 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-utilities" (OuterVolumeSpecName: "utilities") pod "78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" (UID: "78aa71a6-a0d3-48e0-828b-4b9a127e5c0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.349247 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-kube-api-access-r6nmd" (OuterVolumeSpecName: "kube-api-access-r6nmd") pod "78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" (UID: "78aa71a6-a0d3-48e0-828b-4b9a127e5c0f"). InnerVolumeSpecName "kube-api-access-r6nmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.391775 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" (UID: "78aa71a6-a0d3-48e0-828b-4b9a127e5c0f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.445839 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.445882 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6nmd\" (UniqueName: \"kubernetes.io/projected/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-kube-api-access-r6nmd\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.445899 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.559233 5024 generic.go:334] "Generic (PLEG): container finished" podID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerID="eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7" exitCode=0 Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.559316 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66lp" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.559308 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66lp" event={"ID":"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f","Type":"ContainerDied","Data":"eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7"} Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.559467 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66lp" event={"ID":"78aa71a6-a0d3-48e0-828b-4b9a127e5c0f","Type":"ContainerDied","Data":"b0da9cd0dadd40a2e9d1840a0559b73f803d33f31001a92595c8873816b2321e"} Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.559491 5024 scope.go:117] "RemoveContainer" containerID="eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.641490 5024 scope.go:117] "RemoveContainer" containerID="0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.647863 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w66lp"] Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.660583 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w66lp"] Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.667587 5024 scope.go:117] "RemoveContainer" containerID="cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.725219 5024 scope.go:117] "RemoveContainer" containerID="eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7" Nov 28 17:40:26 crc kubenswrapper[5024]: E1128 17:40:26.725735 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7\": container with ID starting with eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7 not found: ID does not exist" containerID="eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.725817 
5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7"} err="failed to get container status \"eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7\": rpc error: code = NotFound desc = could not find container \"eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7\": container with ID starting with eb09456974d0ad8aadde0452d59497f8f3963f0620cc6b08aadc01a807588dc7 not found: ID does not exist" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.725868 5024 scope.go:117] "RemoveContainer" containerID="0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f" Nov 28 17:40:26 crc kubenswrapper[5024]: E1128 17:40:26.726410 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f\": container with ID starting with 0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f not found: ID does not exist" containerID="0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.726452 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f"} err="failed to get container status \"0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f\": rpc error: code = NotFound desc = could not find container \"0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f\": container with ID starting with 0a6a2c40c415a0e31149bbf6af3487eb4d66f8622e6a2663df2544ba35cdc65f not found: ID does not exist" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.726475 5024 scope.go:117] "RemoveContainer" containerID="cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634" Nov 28 17:40:26 crc kubenswrapper[5024]: E1128 17:40:26.727004 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634\": container with ID starting with cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634 not found: ID does not exist" containerID="cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634" Nov 28 17:40:26 crc kubenswrapper[5024]: I1128 17:40:26.727087 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634"} err="failed to get container status \"cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634\": rpc error: code = NotFound desc = could not find container \"cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634\": container with ID starting with cd4b10ca1dcf5235df9397eac1b35a3b86be0a73c2952406c8918a10d162c634 not found: ID does not exist" Nov 28 17:40:28 crc kubenswrapper[5024]: I1128 17:40:28.511828 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" path="/var/lib/kubelet/pods/78aa71a6-a0d3-48e0-828b-4b9a127e5c0f/volumes" Nov 28 17:40:35 crc kubenswrapper[5024]: I1128 17:40:35.498070 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:40:35 crc kubenswrapper[5024]: E1128 17:40:35.498855 5024 pod_workers.go:1301] "Error syncing pod, skipping" 
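The log.go:32 NotFound errors above are not independent failures: each container ID (eb0945..., 0a6a2c..., cd4b10...) had already been removed, so the follow-up ContainerStatus lookup during the second RemoveContainer pass finds nothing. A sketch that pairs each NotFound with earlier RemoveContainer requests for the same ID, separating this benign already-deleted race from lookups that never had a removal:

import re
from collections import defaultdict

REMOVE_RE = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
NOTFOUND_RE = re.compile(
    r'"ContainerStatus from runtime service failed".*containerID="([0-9a-f]{64})"'
)

def classify(path="kubelet.log"):
    removes = defaultdict(int)
    benign, suspicious = [], []
    with open(path) as f:
        for line in f:
            m = REMOVE_RE.search(line)
            if m:
                removes[m.group(1)] += 1
                continue
            m = NOTFOUND_RE.search(line)
            if m:
                cid = m.group(1)
                # NotFound after an earlier RemoveContainer for the same ID is
                # the expected "already gone" case; otherwise flag it.
                (benign if removes[cid] else suspicious).append(cid)
    return benign, suspicious

if __name__ == "__main__":
    benign, suspicious = classify()
    print(f"{len(benign)} benign already-deleted lookups, {len(suspicious)} to inspect")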
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:40:50 crc kubenswrapper[5024]: I1128 17:40:50.498499 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:40:50 crc kubenswrapper[5024]: E1128 17:40:50.499475 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:40:51 crc kubenswrapper[5024]: I1128 17:40:51.852013 5024 generic.go:334] "Generic (PLEG): container finished" podID="a052b839-2b8d-4f97-afc6-29279c78dbdc" containerID="fb0a19aebdaf1f0817363a0c5068b27cde10e7c4a5b956b0659ff3767f073174" exitCode=0 Nov 28 17:40:51 crc kubenswrapper[5024]: I1128 17:40:51.852052 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" event={"ID":"a052b839-2b8d-4f97-afc6-29279c78dbdc","Type":"ContainerDied","Data":"fb0a19aebdaf1f0817363a0c5068b27cde10e7c4a5b956b0659ff3767f073174"} Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.343054 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.462951 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-metadata-combined-ca-bundle\") pod \"a052b839-2b8d-4f97-afc6-29279c78dbdc\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.463439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-nova-metadata-neutron-config-0\") pod \"a052b839-2b8d-4f97-afc6-29279c78dbdc\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.463521 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a052b839-2b8d-4f97-afc6-29279c78dbdc\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.463561 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmlpm\" (UniqueName: \"kubernetes.io/projected/a052b839-2b8d-4f97-afc6-29279c78dbdc-kube-api-access-wmlpm\") pod \"a052b839-2b8d-4f97-afc6-29279c78dbdc\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.463611 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-ssh-key\") pod \"a052b839-2b8d-4f97-afc6-29279c78dbdc\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.463698 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-inventory\") pod \"a052b839-2b8d-4f97-afc6-29279c78dbdc\" (UID: \"a052b839-2b8d-4f97-afc6-29279c78dbdc\") " Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.469856 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a052b839-2b8d-4f97-afc6-29279c78dbdc-kube-api-access-wmlpm" (OuterVolumeSpecName: "kube-api-access-wmlpm") pod "a052b839-2b8d-4f97-afc6-29279c78dbdc" (UID: "a052b839-2b8d-4f97-afc6-29279c78dbdc"). InnerVolumeSpecName "kube-api-access-wmlpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.478466 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a052b839-2b8d-4f97-afc6-29279c78dbdc" (UID: "a052b839-2b8d-4f97-afc6-29279c78dbdc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.499736 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a052b839-2b8d-4f97-afc6-29279c78dbdc" (UID: "a052b839-2b8d-4f97-afc6-29279c78dbdc"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.502108 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-inventory" (OuterVolumeSpecName: "inventory") pod "a052b839-2b8d-4f97-afc6-29279c78dbdc" (UID: "a052b839-2b8d-4f97-afc6-29279c78dbdc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.502161 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a052b839-2b8d-4f97-afc6-29279c78dbdc" (UID: "a052b839-2b8d-4f97-afc6-29279c78dbdc"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.505213 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a052b839-2b8d-4f97-afc6-29279c78dbdc" (UID: "a052b839-2b8d-4f97-afc6-29279c78dbdc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.567499 5024 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.567550 5024 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.567565 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmlpm\" (UniqueName: \"kubernetes.io/projected/a052b839-2b8d-4f97-afc6-29279c78dbdc-kube-api-access-wmlpm\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.567578 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.567590 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.567602 5024 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a052b839-2b8d-4f97-afc6-29279c78dbdc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.879330 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" event={"ID":"a052b839-2b8d-4f97-afc6-29279c78dbdc","Type":"ContainerDied","Data":"13f60a574b5420d99e7a0a8fb7551a3896001672f0f994cdeac2341e1df402ea"} Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.879368 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d" Nov 28 17:40:53 crc kubenswrapper[5024]: I1128 17:40:53.879382 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f60a574b5420d99e7a0a8fb7551a3896001672f0f994cdeac2341e1df402ea" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.059850 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx"] Nov 28 17:40:54 crc kubenswrapper[5024]: E1128 17:40:54.060704 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a052b839-2b8d-4f97-afc6-29279c78dbdc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.060730 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a052b839-2b8d-4f97-afc6-29279c78dbdc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 17:40:54 crc kubenswrapper[5024]: E1128 17:40:54.060780 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="registry-server" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.060791 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="registry-server" Nov 28 17:40:54 crc kubenswrapper[5024]: E1128 17:40:54.060811 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="extract-utilities" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.060819 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="extract-utilities" Nov 28 17:40:54 crc kubenswrapper[5024]: E1128 17:40:54.060845 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="extract-content" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.060855 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="extract-content" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.061217 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a052b839-2b8d-4f97-afc6-29279c78dbdc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.061245 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="78aa71a6-a0d3-48e0-828b-4b9a127e5c0f" containerName="registry-server" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.062283 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.064680 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.064873 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.065342 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.066877 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.067102 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.075010 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx"] Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.195983 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9x4p\" (UniqueName: \"kubernetes.io/projected/0c74575c-09fd-4190-9781-0e1e98d85d85-kube-api-access-j9x4p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.196348 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.196450 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.196576 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.196816 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.299784 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.299856 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9x4p\" (UniqueName: \"kubernetes.io/projected/0c74575c-09fd-4190-9781-0e1e98d85d85-kube-api-access-j9x4p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.299927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.299952 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.299979 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.304210 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.304233 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.304443 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.304749 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.317136 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9x4p\" (UniqueName: \"kubernetes.io/projected/0c74575c-09fd-4190-9781-0e1e98d85d85-kube-api-access-j9x4p\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.409216 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:40:54 crc kubenswrapper[5024]: I1128 17:40:54.958011 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx"] Nov 28 17:40:54 crc kubenswrapper[5024]: W1128 17:40:54.959339 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c74575c_09fd_4190_9781_0e1e98d85d85.slice/crio-32148fac4b5e09fbede2248e99da453a6144679aba02557e56b63c77e9d325e6 WatchSource:0}: Error finding container 32148fac4b5e09fbede2248e99da453a6144679aba02557e56b63c77e9d325e6: Status 404 returned error can't find the container with id 32148fac4b5e09fbede2248e99da453a6144679aba02557e56b63c77e9d325e6 Nov 28 17:40:55 crc kubenswrapper[5024]: I1128 17:40:55.902554 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" event={"ID":"0c74575c-09fd-4190-9781-0e1e98d85d85","Type":"ContainerStarted","Data":"d9b0d05ebf0f68eeb080d48c9d6e3e0f1617d2a4e736c7a480e981c142080f12"} Nov 28 17:40:55 crc kubenswrapper[5024]: I1128 17:40:55.903312 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" event={"ID":"0c74575c-09fd-4190-9781-0e1e98d85d85","Type":"ContainerStarted","Data":"32148fac4b5e09fbede2248e99da453a6144679aba02557e56b63c77e9d325e6"} Nov 28 17:40:55 crc kubenswrapper[5024]: I1128 17:40:55.932057 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" podStartSLOduration=1.518084328 podStartE2EDuration="1.932035643s" podCreationTimestamp="2025-11-28 17:40:54 +0000 UTC" firstStartedPulling="2025-11-28 17:40:54.96359946 +0000 UTC m=+2557.012520355" lastFinishedPulling="2025-11-28 17:40:55.377550765 +0000 UTC m=+2557.426471670" observedRunningTime="2025-11-28 17:40:55.917212116 +0000 UTC m=+2557.966133051" watchObservedRunningTime="2025-11-28 17:40:55.932035643 +0000 UTC m=+2557.980956548" Nov 28 17:41:04 crc kubenswrapper[5024]: I1128 17:41:04.498556 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:41:04 crc kubenswrapper[5024]: E1128 17:41:04.499904 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:41:16 crc 
Nov 28 17:41:04 crc kubenswrapper[5024]: I1128 17:41:04.498556 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240"
Nov 28 17:41:04 crc kubenswrapper[5024]: E1128 17:41:04.499904 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 17:41:16 crc kubenswrapper[5024]: I1128 17:41:16.498934 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240"
Nov 28 17:41:16 crc kubenswrapper[5024]: E1128 17:41:16.499900 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 17:41:27 crc kubenswrapper[5024]: I1128 17:41:27.499241 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240"
Nov 28 17:41:27 crc kubenswrapper[5024]: E1128 17:41:27.500208 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 17:41:38 crc kubenswrapper[5024]: I1128 17:41:38.509087 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240"
Nov 28 17:41:39 crc kubenswrapper[5024]: I1128 17:41:39.380048 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"fe10743f8ef10d0cc481623f4350e242bd45d62a69009ce1616be5319adfb435"}
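[Annotation] The repeating RemoveContainer / "Error syncing pod" pairs above are the sync loop bumping into kubelet's container restart back-off: a crashing container is restarted with an exponentially growing delay, which is where the "back-off 5m0s" in the error text comes from. A minimal sketch of that schedule, assuming the upstream kubelet defaults (10 s base, doubled per restart, capped at 300 s) rather than anything read from this node's config:

    # Sketch of the restart back-off behind "back-off 5m0s restarting failed
    # container". Assumed constants: 10 s base delay, doubling, 300 s cap.
    def restart_delays(restarts, base=10.0, cap=300.0):
        delay = base
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= 2

    print(list(restart_delays(7)))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0] -- after a handful of
    # crashes every sync attempt is rejected until the full 5m0s window has
    # elapsed, consistent with the restart only succeeding at 17:41:38-39.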
Nov 28 17:44:07 crc kubenswrapper[5024]: I1128 17:44:07.959190 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:44:07 crc kubenswrapper[5024]: I1128 17:44:07.959666 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:44:37 crc kubenswrapper[5024]: I1128 17:44:37.565472 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:44:37 crc kubenswrapper[5024]: I1128 17:44:37.567272 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.149695 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"]
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.152239 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.154106 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.154610 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.162416 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"]
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.256202 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6778bff2-d762-4d52-9833-248e57acab6e-secret-volume\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.256284 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6778bff2-d762-4d52-9833-248e57acab6e-config-volume\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.257056 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn587\" (UniqueName: \"kubernetes.io/projected/6778bff2-d762-4d52-9833-248e57acab6e-kube-api-access-sn587\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.359418 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn587\" (UniqueName: \"kubernetes.io/projected/6778bff2-d762-4d52-9833-248e57acab6e-kube-api-access-sn587\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.359477 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6778bff2-d762-4d52-9833-248e57acab6e-secret-volume\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"
Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.359519 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6778bff2-d762-4d52-9833-248e57acab6e-config-volume\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"
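[Annotation] The suffix in collect-profiles-29405865 is not random: the CronJob controller names each Job after its scheduled run time in minutes since the Unix epoch, which is why this pod appears at exactly 17:45:00. Decoding both Job names seen in this part of the log:

    from datetime import datetime, timezone

    # "<cronjob-name>-<scheduled time in minutes since the epoch>"
    print(datetime.fromtimestamp(29405865 * 60, tz=timezone.utc))
    # 2025-11-28 17:45:00+00:00 -- the scheduled run this pod belongs to

    print(datetime.fromtimestamp(29405820 * 60, tz=timezone.utc))
    # 2025-11-28 17:00:00+00:00 -- collect-profiles-29405820, the old Job
    # removed in the SyncLoop DELETE a few entries below; 45 minutes back,
    # consistent with a */15 schedule keeping three finished runs around.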
(UniqueName: \"kubernetes.io/configmap/6778bff2-d762-4d52-9833-248e57acab6e-config-volume\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.365783 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6778bff2-d762-4d52-9833-248e57acab6e-secret-volume\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.382976 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn587\" (UniqueName: \"kubernetes.io/projected/6778bff2-d762-4d52-9833-248e57acab6e-kube-api-access-sn587\") pod \"collect-profiles-29405865-zxzp8\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.472490 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" Nov 28 17:45:00 crc kubenswrapper[5024]: I1128 17:45:00.952868 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"] Nov 28 17:45:01 crc kubenswrapper[5024]: I1128 17:45:01.635942 5024 generic.go:334] "Generic (PLEG): container finished" podID="6778bff2-d762-4d52-9833-248e57acab6e" containerID="2ba34b8bea593369d86fd6cb11ee0cfaed9b10c5ecbb5c2a48598033a2bcf63f" exitCode=0 Nov 28 17:45:01 crc kubenswrapper[5024]: I1128 17:45:01.636064 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" event={"ID":"6778bff2-d762-4d52-9833-248e57acab6e","Type":"ContainerDied","Data":"2ba34b8bea593369d86fd6cb11ee0cfaed9b10c5ecbb5c2a48598033a2bcf63f"} Nov 28 17:45:01 crc kubenswrapper[5024]: I1128 17:45:01.636578 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" event={"ID":"6778bff2-d762-4d52-9833-248e57acab6e","Type":"ContainerStarted","Data":"c901629ecf07a1d2e625b69eff583140ca294bc2e45871ace53a3775b42f7fa8"} Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.117201 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.233610 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6778bff2-d762-4d52-9833-248e57acab6e-secret-volume\") pod \"6778bff2-d762-4d52-9833-248e57acab6e\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.233694 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn587\" (UniqueName: \"kubernetes.io/projected/6778bff2-d762-4d52-9833-248e57acab6e-kube-api-access-sn587\") pod \"6778bff2-d762-4d52-9833-248e57acab6e\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.233758 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6778bff2-d762-4d52-9833-248e57acab6e-config-volume\") pod \"6778bff2-d762-4d52-9833-248e57acab6e\" (UID: \"6778bff2-d762-4d52-9833-248e57acab6e\") " Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.234533 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6778bff2-d762-4d52-9833-248e57acab6e-config-volume" (OuterVolumeSpecName: "config-volume") pod "6778bff2-d762-4d52-9833-248e57acab6e" (UID: "6778bff2-d762-4d52-9833-248e57acab6e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.234905 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6778bff2-d762-4d52-9833-248e57acab6e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.239233 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6778bff2-d762-4d52-9833-248e57acab6e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6778bff2-d762-4d52-9833-248e57acab6e" (UID: "6778bff2-d762-4d52-9833-248e57acab6e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.240660 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6778bff2-d762-4d52-9833-248e57acab6e-kube-api-access-sn587" (OuterVolumeSpecName: "kube-api-access-sn587") pod "6778bff2-d762-4d52-9833-248e57acab6e" (UID: "6778bff2-d762-4d52-9833-248e57acab6e"). InnerVolumeSpecName "kube-api-access-sn587". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.336984 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6778bff2-d762-4d52-9833-248e57acab6e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.337028 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn587\" (UniqueName: \"kubernetes.io/projected/6778bff2-d762-4d52-9833-248e57acab6e-kube-api-access-sn587\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.664872 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" event={"ID":"6778bff2-d762-4d52-9833-248e57acab6e","Type":"ContainerDied","Data":"c901629ecf07a1d2e625b69eff583140ca294bc2e45871ace53a3775b42f7fa8"} Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.665097 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c901629ecf07a1d2e625b69eff583140ca294bc2e45871ace53a3775b42f7fa8" Nov 28 17:45:03 crc kubenswrapper[5024]: I1128 17:45:03.665194 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8" Nov 28 17:45:04 crc kubenswrapper[5024]: I1128 17:45:04.200590 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c"] Nov 28 17:45:04 crc kubenswrapper[5024]: I1128 17:45:04.213698 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-dgz4c"] Nov 28 17:45:04 crc kubenswrapper[5024]: I1128 17:45:04.521846 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a" path="/var/lib/kubelet/pods/fbbc3e77-bdd7-4ca2-bbed-bcf4d118385a/volumes" Nov 28 17:45:05 crc kubenswrapper[5024]: I1128 17:45:05.689154 5024 generic.go:334] "Generic (PLEG): container finished" podID="0c74575c-09fd-4190-9781-0e1e98d85d85" containerID="d9b0d05ebf0f68eeb080d48c9d6e3e0f1617d2a4e736c7a480e981c142080f12" exitCode=0 Nov 28 17:45:05 crc kubenswrapper[5024]: I1128 17:45:05.689276 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" event={"ID":"0c74575c-09fd-4190-9781-0e1e98d85d85","Type":"ContainerDied","Data":"d9b0d05ebf0f68eeb080d48c9d6e3e0f1617d2a4e736c7a480e981c142080f12"} Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.196115 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.254654 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-secret-0\") pod \"0c74575c-09fd-4190-9781-0e1e98d85d85\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.254832 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-combined-ca-bundle\") pod \"0c74575c-09fd-4190-9781-0e1e98d85d85\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.254958 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9x4p\" (UniqueName: \"kubernetes.io/projected/0c74575c-09fd-4190-9781-0e1e98d85d85-kube-api-access-j9x4p\") pod \"0c74575c-09fd-4190-9781-0e1e98d85d85\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.254991 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-inventory\") pod \"0c74575c-09fd-4190-9781-0e1e98d85d85\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.255084 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-ssh-key\") pod \"0c74575c-09fd-4190-9781-0e1e98d85d85\" (UID: \"0c74575c-09fd-4190-9781-0e1e98d85d85\") " Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.261509 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0c74575c-09fd-4190-9781-0e1e98d85d85" (UID: "0c74575c-09fd-4190-9781-0e1e98d85d85"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.261624 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c74575c-09fd-4190-9781-0e1e98d85d85-kube-api-access-j9x4p" (OuterVolumeSpecName: "kube-api-access-j9x4p") pod "0c74575c-09fd-4190-9781-0e1e98d85d85" (UID: "0c74575c-09fd-4190-9781-0e1e98d85d85"). InnerVolumeSpecName "kube-api-access-j9x4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.289438 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-inventory" (OuterVolumeSpecName: "inventory") pod "0c74575c-09fd-4190-9781-0e1e98d85d85" (UID: "0c74575c-09fd-4190-9781-0e1e98d85d85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.293289 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0c74575c-09fd-4190-9781-0e1e98d85d85" (UID: "0c74575c-09fd-4190-9781-0e1e98d85d85"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.294226 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0c74575c-09fd-4190-9781-0e1e98d85d85" (UID: "0c74575c-09fd-4190-9781-0e1e98d85d85"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.358614 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9x4p\" (UniqueName: \"kubernetes.io/projected/0c74575c-09fd-4190-9781-0e1e98d85d85-kube-api-access-j9x4p\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.358921 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.358939 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.358952 5024 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.358963 5024 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c74575c-09fd-4190-9781-0e1e98d85d85-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.564734 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.564783 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.564828 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.565828 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe10743f8ef10d0cc481623f4350e242bd45d62a69009ce1616be5319adfb435"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.565891 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" 
containerID="cri-o://fe10743f8ef10d0cc481623f4350e242bd45d62a69009ce1616be5319adfb435" gracePeriod=600 Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.712174 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" event={"ID":"0c74575c-09fd-4190-9781-0e1e98d85d85","Type":"ContainerDied","Data":"32148fac4b5e09fbede2248e99da453a6144679aba02557e56b63c77e9d325e6"} Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.712245 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32148fac4b5e09fbede2248e99da453a6144679aba02557e56b63c77e9d325e6" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.712199 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.714302 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="fe10743f8ef10d0cc481623f4350e242bd45d62a69009ce1616be5319adfb435" exitCode=0 Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.714347 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"fe10743f8ef10d0cc481623f4350e242bd45d62a69009ce1616be5319adfb435"} Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.714406 5024 scope.go:117] "RemoveContainer" containerID="3150b471088edc2efcc4552b475a2cb2415837617571d8bd8f88aac77b1b5240" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.814988 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w"] Nov 28 17:45:07 crc kubenswrapper[5024]: E1128 17:45:07.815774 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6778bff2-d762-4d52-9833-248e57acab6e" containerName="collect-profiles" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.815804 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="6778bff2-d762-4d52-9833-248e57acab6e" containerName="collect-profiles" Nov 28 17:45:07 crc kubenswrapper[5024]: E1128 17:45:07.815829 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c74575c-09fd-4190-9781-0e1e98d85d85" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.815839 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c74575c-09fd-4190-9781-0e1e98d85d85" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.816232 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="6778bff2-d762-4d52-9833-248e57acab6e" containerName="collect-profiles" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.816262 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c74575c-09fd-4190-9781-0e1e98d85d85" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.817346 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.824586 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.824684 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.824856 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.824951 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.825317 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.825428 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.825646 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.870256 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.870321 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.870372 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.870627 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5db4\" (UniqueName: \"kubernetes.io/projected/98dfedf7-c96b-4029-8893-74f4abd9124b-kube-api-access-f5db4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.870725 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.870835 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.870955 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.871147 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.871212 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.874994 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w"] Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.972929 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973320 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973396 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973443 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973493 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973668 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5db4\" (UniqueName: \"kubernetes.io/projected/98dfedf7-c96b-4029-8893-74f4abd9124b-kube-api-access-f5db4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973715 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973773 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.973847 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.975045 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.978764 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.979215 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.980106 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.980475 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.981114 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.981262 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.981507 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:07 crc kubenswrapper[5024]: I1128 17:45:07.990820 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5db4\" (UniqueName: \"kubernetes.io/projected/98dfedf7-c96b-4029-8893-74f4abd9124b-kube-api-access-f5db4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pkt6w\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:08 crc kubenswrapper[5024]: I1128 17:45:08.201828 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:45:08 crc kubenswrapper[5024]: I1128 17:45:08.727143 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d"} Nov 28 17:45:08 crc kubenswrapper[5024]: I1128 17:45:08.857643 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w"] Nov 28 17:45:08 crc kubenswrapper[5024]: I1128 17:45:08.861534 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:45:09 crc kubenswrapper[5024]: I1128 17:45:09.744075 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" event={"ID":"98dfedf7-c96b-4029-8893-74f4abd9124b","Type":"ContainerStarted","Data":"be4acb2593fbbb9f9c41657c6d690b344a7183f8a5bceb33847e1cb53694cc04"} Nov 28 17:45:10 crc kubenswrapper[5024]: I1128 17:45:10.754398 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" event={"ID":"98dfedf7-c96b-4029-8893-74f4abd9124b","Type":"ContainerStarted","Data":"76218fbc5707116373b9bdaec63e87e9c6fc3bd5706fedda479dc1695d2406de"} Nov 28 17:45:10 crc kubenswrapper[5024]: I1128 17:45:10.778402 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" podStartSLOduration=3.125544912 podStartE2EDuration="3.77836968s" podCreationTimestamp="2025-11-28 17:45:07 +0000 UTC" firstStartedPulling="2025-11-28 17:45:08.861325603 +0000 UTC m=+2810.910246508" lastFinishedPulling="2025-11-28 17:45:09.514150371 +0000 UTC m=+2811.563071276" observedRunningTime="2025-11-28 17:45:10.772321115 +0000 UTC m=+2812.821242030" watchObservedRunningTime="2025-11-28 17:45:10.77836968 +0000 UTC m=+2812.827290585" Nov 28 17:45:53 crc kubenswrapper[5024]: I1128 17:45:53.801690 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-74757657c9-s2n28" podUID="634068c7-593f-43ee-8b4e-4be8f66c51c5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 28 17:45:57 crc kubenswrapper[5024]: I1128 17:45:57.306693 5024 scope.go:117] "RemoveContainer" containerID="763f439d1b9a70e804ea009d13e823966fef4de6bd0f6ff7e2831fba5e990d9c" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.538352 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v6lcn"] Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.555772 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6lcn"] Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.555932 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.576342 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrpnc\" (UniqueName: \"kubernetes.io/projected/e373024a-8b55-49c9-a147-f4ff7b232fc6-kube-api-access-jrpnc\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.576489 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-catalog-content\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.576562 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-utilities\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.679896 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrpnc\" (UniqueName: \"kubernetes.io/projected/e373024a-8b55-49c9-a147-f4ff7b232fc6-kube-api-access-jrpnc\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.680041 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-catalog-content\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.680089 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-utilities\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.680718 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-utilities\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.680817 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-catalog-content\") pod \"redhat-operators-v6lcn\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.759244 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrpnc\" (UniqueName: \"kubernetes.io/projected/e373024a-8b55-49c9-a147-f4ff7b232fc6-kube-api-access-jrpnc\") pod \"redhat-operators-v6lcn\" (UID: 
\"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:02 crc kubenswrapper[5024]: I1128 17:46:02.890656 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:03 crc kubenswrapper[5024]: I1128 17:46:03.449469 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6lcn"] Nov 28 17:46:04 crc kubenswrapper[5024]: I1128 17:46:04.304808 5024 generic.go:334] "Generic (PLEG): container finished" podID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerID="a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab" exitCode=0 Nov 28 17:46:04 crc kubenswrapper[5024]: I1128 17:46:04.304924 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lcn" event={"ID":"e373024a-8b55-49c9-a147-f4ff7b232fc6","Type":"ContainerDied","Data":"a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab"} Nov 28 17:46:04 crc kubenswrapper[5024]: I1128 17:46:04.305769 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lcn" event={"ID":"e373024a-8b55-49c9-a147-f4ff7b232fc6","Type":"ContainerStarted","Data":"f4ac50da2f6592a0899bcf622fcda37ab78dba688b9187f4069f541f3dcc0e6d"} Nov 28 17:46:06 crc kubenswrapper[5024]: I1128 17:46:06.331669 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lcn" event={"ID":"e373024a-8b55-49c9-a147-f4ff7b232fc6","Type":"ContainerStarted","Data":"527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b"} Nov 28 17:46:10 crc kubenswrapper[5024]: I1128 17:46:10.381337 5024 generic.go:334] "Generic (PLEG): container finished" podID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerID="527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b" exitCode=0 Nov 28 17:46:10 crc kubenswrapper[5024]: I1128 17:46:10.381491 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lcn" event={"ID":"e373024a-8b55-49c9-a147-f4ff7b232fc6","Type":"ContainerDied","Data":"527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b"} Nov 28 17:46:11 crc kubenswrapper[5024]: I1128 17:46:11.394801 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lcn" event={"ID":"e373024a-8b55-49c9-a147-f4ff7b232fc6","Type":"ContainerStarted","Data":"33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe"} Nov 28 17:46:11 crc kubenswrapper[5024]: I1128 17:46:11.419073 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v6lcn" podStartSLOduration=2.8007836680000002 podStartE2EDuration="9.419049273s" podCreationTimestamp="2025-11-28 17:46:02 +0000 UTC" firstStartedPulling="2025-11-28 17:46:04.308307708 +0000 UTC m=+2866.357228613" lastFinishedPulling="2025-11-28 17:46:10.926573303 +0000 UTC m=+2872.975494218" observedRunningTime="2025-11-28 17:46:11.409752024 +0000 UTC m=+2873.458672939" watchObservedRunningTime="2025-11-28 17:46:11.419049273 +0000 UTC m=+2873.467970178" Nov 28 17:46:12 crc kubenswrapper[5024]: I1128 17:46:12.891474 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:12 crc kubenswrapper[5024]: I1128 17:46:12.891849 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:13 crc kubenswrapper[5024]: I1128 17:46:13.868361 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tgbj2"] Nov 28 17:46:13 crc kubenswrapper[5024]: I1128 17:46:13.872058 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:13 crc kubenswrapper[5024]: I1128 17:46:13.879623 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgbj2"] Nov 28 17:46:13 crc kubenswrapper[5024]: I1128 17:46:13.891429 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-utilities\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:13 crc kubenswrapper[5024]: I1128 17:46:13.891496 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tnq\" (UniqueName: \"kubernetes.io/projected/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-kube-api-access-78tnq\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:13 crc kubenswrapper[5024]: I1128 17:46:13.891715 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-catalog-content\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:13 crc kubenswrapper[5024]: I1128 17:46:13.938969 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6lcn" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="registry-server" probeResult="failure" output=< Nov 28 17:46:13 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 17:46:13 crc kubenswrapper[5024]: > Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.001927 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-utilities\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.002076 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78tnq\" (UniqueName: \"kubernetes.io/projected/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-kube-api-access-78tnq\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.002267 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-catalog-content\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.003206 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-catalog-content\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.003521 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-utilities\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.093178 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78tnq\" (UniqueName: \"kubernetes.io/projected/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-kube-api-access-78tnq\") pod \"community-operators-tgbj2\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.197378 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:14 crc kubenswrapper[5024]: I1128 17:46:14.998509 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgbj2"] Nov 28 17:46:15 crc kubenswrapper[5024]: I1128 17:46:15.459353 5024 generic.go:334] "Generic (PLEG): container finished" podID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerID="bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9" exitCode=0 Nov 28 17:46:15 crc kubenswrapper[5024]: I1128 17:46:15.459434 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgbj2" event={"ID":"84197a7a-0289-42f5-bf5a-ee4b4d0854d1","Type":"ContainerDied","Data":"bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9"} Nov 28 17:46:15 crc kubenswrapper[5024]: I1128 17:46:15.459658 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgbj2" event={"ID":"84197a7a-0289-42f5-bf5a-ee4b4d0854d1","Type":"ContainerStarted","Data":"4faefaa67d6039b6c0bc241109a4e084b79ec1d2231f3c1e8195bde98bd537dc"} Nov 28 17:46:16 crc kubenswrapper[5024]: I1128 17:46:16.473161 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgbj2" event={"ID":"84197a7a-0289-42f5-bf5a-ee4b4d0854d1","Type":"ContainerStarted","Data":"16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049"} Nov 28 17:46:17 crc kubenswrapper[5024]: I1128 17:46:17.565110 5024 generic.go:334] "Generic (PLEG): container finished" podID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerID="16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049" exitCode=0 Nov 28 17:46:17 crc kubenswrapper[5024]: I1128 17:46:17.565199 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgbj2" event={"ID":"84197a7a-0289-42f5-bf5a-ee4b4d0854d1","Type":"ContainerDied","Data":"16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049"} Nov 28 17:46:19 crc kubenswrapper[5024]: I1128 17:46:19.595937 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgbj2" 
event={"ID":"84197a7a-0289-42f5-bf5a-ee4b4d0854d1","Type":"ContainerStarted","Data":"db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c"} Nov 28 17:46:19 crc kubenswrapper[5024]: I1128 17:46:19.622837 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tgbj2" podStartSLOduration=3.296438946 podStartE2EDuration="6.622812882s" podCreationTimestamp="2025-11-28 17:46:13 +0000 UTC" firstStartedPulling="2025-11-28 17:46:15.462843887 +0000 UTC m=+2877.511764792" lastFinishedPulling="2025-11-28 17:46:18.789217823 +0000 UTC m=+2880.838138728" observedRunningTime="2025-11-28 17:46:19.612598737 +0000 UTC m=+2881.661519662" watchObservedRunningTime="2025-11-28 17:46:19.622812882 +0000 UTC m=+2881.671733787" Nov 28 17:46:23 crc kubenswrapper[5024]: I1128 17:46:23.965266 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6lcn" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="registry-server" probeResult="failure" output=< Nov 28 17:46:23 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 17:46:23 crc kubenswrapper[5024]: > Nov 28 17:46:24 crc kubenswrapper[5024]: I1128 17:46:24.197617 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:24 crc kubenswrapper[5024]: I1128 17:46:24.197673 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:24 crc kubenswrapper[5024]: I1128 17:46:24.245391 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:24 crc kubenswrapper[5024]: I1128 17:46:24.694177 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:24 crc kubenswrapper[5024]: I1128 17:46:24.754637 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgbj2"] Nov 28 17:46:26 crc kubenswrapper[5024]: I1128 17:46:26.666863 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tgbj2" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="registry-server" containerID="cri-o://db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c" gracePeriod=2 Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.382783 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.422244 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-catalog-content\") pod \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.422530 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-utilities\") pod \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.422574 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78tnq\" (UniqueName: \"kubernetes.io/projected/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-kube-api-access-78tnq\") pod \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\" (UID: \"84197a7a-0289-42f5-bf5a-ee4b4d0854d1\") " Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.424658 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-utilities" (OuterVolumeSpecName: "utilities") pod "84197a7a-0289-42f5-bf5a-ee4b4d0854d1" (UID: "84197a7a-0289-42f5-bf5a-ee4b4d0854d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.467881 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84197a7a-0289-42f5-bf5a-ee4b4d0854d1" (UID: "84197a7a-0289-42f5-bf5a-ee4b4d0854d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.476590 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-kube-api-access-78tnq" (OuterVolumeSpecName: "kube-api-access-78tnq") pod "84197a7a-0289-42f5-bf5a-ee4b4d0854d1" (UID: "84197a7a-0289-42f5-bf5a-ee4b4d0854d1"). InnerVolumeSpecName "kube-api-access-78tnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.525913 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.527466 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.527608 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78tnq\" (UniqueName: \"kubernetes.io/projected/84197a7a-0289-42f5-bf5a-ee4b4d0854d1-kube-api-access-78tnq\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.679166 5024 generic.go:334] "Generic (PLEG): container finished" podID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerID="db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c" exitCode=0 Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.679217 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgbj2" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.679212 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgbj2" event={"ID":"84197a7a-0289-42f5-bf5a-ee4b4d0854d1","Type":"ContainerDied","Data":"db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c"} Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.679267 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgbj2" event={"ID":"84197a7a-0289-42f5-bf5a-ee4b4d0854d1","Type":"ContainerDied","Data":"4faefaa67d6039b6c0bc241109a4e084b79ec1d2231f3c1e8195bde98bd537dc"} Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.679287 5024 scope.go:117] "RemoveContainer" containerID="db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.723374 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgbj2"] Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.727981 5024 scope.go:117] "RemoveContainer" containerID="16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.734878 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tgbj2"] Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.759746 5024 scope.go:117] "RemoveContainer" containerID="bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.799912 5024 scope.go:117] "RemoveContainer" containerID="db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c" Nov 28 17:46:27 crc kubenswrapper[5024]: E1128 17:46:27.800412 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c\": container with ID starting with db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c not found: ID does not exist" containerID="db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.800461 
5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c"} err="failed to get container status \"db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c\": rpc error: code = NotFound desc = could not find container \"db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c\": container with ID starting with db77023de8c4b79335c82b327480aeed5bc2e9b11b3a508a7498329aa239530c not found: ID does not exist" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.800490 5024 scope.go:117] "RemoveContainer" containerID="16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049" Nov 28 17:46:27 crc kubenswrapper[5024]: E1128 17:46:27.800869 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049\": container with ID starting with 16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049 not found: ID does not exist" containerID="16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.800907 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049"} err="failed to get container status \"16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049\": rpc error: code = NotFound desc = could not find container \"16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049\": container with ID starting with 16736201084e6288ba228d95a3df7e1eeef18694fba85d8042083ef49f73a049 not found: ID does not exist" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.800934 5024 scope.go:117] "RemoveContainer" containerID="bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9" Nov 28 17:46:27 crc kubenswrapper[5024]: E1128 17:46:27.801401 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9\": container with ID starting with bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9 not found: ID does not exist" containerID="bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9" Nov 28 17:46:27 crc kubenswrapper[5024]: I1128 17:46:27.801439 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9"} err="failed to get container status \"bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9\": rpc error: code = NotFound desc = could not find container \"bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9\": container with ID starting with bc27c630ba893585f31bd89403c03e773558669ab6d27e9c1106c259676a2aa9 not found: ID does not exist" Nov 28 17:46:28 crc kubenswrapper[5024]: I1128 17:46:28.517968 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" path="/var/lib/kubelet/pods/84197a7a-0289-42f5-bf5a-ee4b4d0854d1/volumes" Nov 28 17:46:32 crc kubenswrapper[5024]: I1128 17:46:32.939544 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:32 crc kubenswrapper[5024]: I1128 17:46:32.991405 5024 kubelet.go:2542] "SyncLoop (probe)" 
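
The RemoveContainer / "rpc error: code = NotFound" churn above is benign: by the time the status lookup retries, the container is already gone, so cleanup code typically treats NotFound as success. A sketch of that pattern, using the real google.golang.org/grpc status and codes packages seen in the rpc errors above; removeContainer here is a hypothetical stand-in for the CRI call, not the kubelet's API:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for the runtime's delete call; it always
// reports NotFound here to exercise the tolerant path.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	if err := removeContainer("db77023de8c4..."); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Println("already removed; treating as success")
			return
		}
		fmt.Println("real failure:", err)
	}
}
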
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:33 crc kubenswrapper[5024]: I1128 17:46:33.182256 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6lcn"] Nov 28 17:46:34 crc kubenswrapper[5024]: I1128 17:46:34.756311 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v6lcn" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="registry-server" containerID="cri-o://33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe" gracePeriod=2 Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.300732 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.419169 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-utilities\") pod \"e373024a-8b55-49c9-a147-f4ff7b232fc6\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.419561 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-catalog-content\") pod \"e373024a-8b55-49c9-a147-f4ff7b232fc6\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.419669 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrpnc\" (UniqueName: \"kubernetes.io/projected/e373024a-8b55-49c9-a147-f4ff7b232fc6-kube-api-access-jrpnc\") pod \"e373024a-8b55-49c9-a147-f4ff7b232fc6\" (UID: \"e373024a-8b55-49c9-a147-f4ff7b232fc6\") " Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.420143 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-utilities" (OuterVolumeSpecName: "utilities") pod "e373024a-8b55-49c9-a147-f4ff7b232fc6" (UID: "e373024a-8b55-49c9-a147-f4ff7b232fc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.420573 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.426676 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e373024a-8b55-49c9-a147-f4ff7b232fc6-kube-api-access-jrpnc" (OuterVolumeSpecName: "kube-api-access-jrpnc") pod "e373024a-8b55-49c9-a147-f4ff7b232fc6" (UID: "e373024a-8b55-49c9-a147-f4ff7b232fc6"). InnerVolumeSpecName "kube-api-access-jrpnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.524914 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrpnc\" (UniqueName: \"kubernetes.io/projected/e373024a-8b55-49c9-a147-f4ff7b232fc6-kube-api-access-jrpnc\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.553478 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e373024a-8b55-49c9-a147-f4ff7b232fc6" (UID: "e373024a-8b55-49c9-a147-f4ff7b232fc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.626872 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e373024a-8b55-49c9-a147-f4ff7b232fc6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.774743 5024 generic.go:334] "Generic (PLEG): container finished" podID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerID="33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe" exitCode=0 Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.774820 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lcn" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.774818 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lcn" event={"ID":"e373024a-8b55-49c9-a147-f4ff7b232fc6","Type":"ContainerDied","Data":"33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe"} Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.775169 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lcn" event={"ID":"e373024a-8b55-49c9-a147-f4ff7b232fc6","Type":"ContainerDied","Data":"f4ac50da2f6592a0899bcf622fcda37ab78dba688b9187f4069f541f3dcc0e6d"} Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.775195 5024 scope.go:117] "RemoveContainer" containerID="33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.801251 5024 scope.go:117] "RemoveContainer" containerID="527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.822566 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6lcn"] Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.834519 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v6lcn"] Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.861914 5024 scope.go:117] "RemoveContainer" containerID="a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.895100 5024 scope.go:117] "RemoveContainer" containerID="33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe" Nov 28 17:46:35 crc kubenswrapper[5024]: E1128 17:46:35.895667 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe\": container with ID starting with 33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe 
not found: ID does not exist" containerID="33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.895701 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe"} err="failed to get container status \"33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe\": rpc error: code = NotFound desc = could not find container \"33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe\": container with ID starting with 33b2e16db86a7d98e532c7cd8ccf843d6c6759e65db9509035f3c6f26a2254fe not found: ID does not exist" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.895725 5024 scope.go:117] "RemoveContainer" containerID="527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b" Nov 28 17:46:35 crc kubenswrapper[5024]: E1128 17:46:35.896018 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b\": container with ID starting with 527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b not found: ID does not exist" containerID="527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.896105 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b"} err="failed to get container status \"527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b\": rpc error: code = NotFound desc = could not find container \"527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b\": container with ID starting with 527d7a926cb9043f8bb4c18e1cdb7d9151b0633ad3b79d27ccf683cd188f538b not found: ID does not exist" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.896152 5024 scope.go:117] "RemoveContainer" containerID="a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab" Nov 28 17:46:35 crc kubenswrapper[5024]: E1128 17:46:35.896433 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab\": container with ID starting with a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab not found: ID does not exist" containerID="a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab" Nov 28 17:46:35 crc kubenswrapper[5024]: I1128 17:46:35.896467 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab"} err="failed to get container status \"a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab\": rpc error: code = NotFound desc = could not find container \"a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab\": container with ID starting with a20085f5a9d9ff7861cfc5231adfb8cfa315dc1709754712bf54c098923047ab not found: ID does not exist" Nov 28 17:46:36 crc kubenswrapper[5024]: I1128 17:46:36.512444 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" path="/var/lib/kubelet/pods/e373024a-8b55-49c9-a147-f4ff7b232fc6/volumes" Nov 28 17:47:07 crc kubenswrapper[5024]: I1128 17:47:07.565844 5024 patch_prober.go:28] interesting 
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.120384 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5v4h"]
Nov 28 17:47:11 crc kubenswrapper[5024]: E1128 17:47:11.121483 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="extract-utilities"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121501 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="extract-utilities"
Nov 28 17:47:11 crc kubenswrapper[5024]: E1128 17:47:11.121522 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="extract-utilities"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121530 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="extract-utilities"
Nov 28 17:47:11 crc kubenswrapper[5024]: E1128 17:47:11.121546 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="registry-server"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121555 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="registry-server"
Nov 28 17:47:11 crc kubenswrapper[5024]: E1128 17:47:11.121582 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="registry-server"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121589 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="registry-server"
Nov 28 17:47:11 crc kubenswrapper[5024]: E1128 17:47:11.121612 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="extract-content"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121618 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="extract-content"
Nov 28 17:47:11 crc kubenswrapper[5024]: E1128 17:47:11.121636 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="extract-content"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121644 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="extract-content"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121950 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="84197a7a-0289-42f5-bf5a-ee4b4d0854d1" containerName="registry-server"
Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.121988 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="registry-server"
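
The cpu_manager/memory_manager lines above drop per-(podUID, container) resource assignments left behind by the two marketplace pods deleted earlier in the log. A minimal sketch of that bookkeeping pattern; the types and values here are illustrative, not the kubelet's own state types:

package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	// Illustrative leftover assignments keyed by (podUID, container).
	state := map[key]string{
		{"84197a7a-0289-42f5-bf5a-ee4b4d0854d1", "registry-server"}: "cpuset (illustrative)",
		{"e373024a-8b55-49c9-a147-f4ff7b232fc6", "registry-server"}: "cpuset (illustrative)",
	}
	for _, gone := range []string{
		"84197a7a-0289-42f5-bf5a-ee4b4d0854d1", // community-operators-tgbj2, deleted above
		"e373024a-8b55-49c9-a147-f4ff7b232fc6", // redhat-operators-v6lcn, deleted above
	} {
		for k := range state {
			if k.podUID == gone {
				delete(state, k) // "Deleted CPUSet assignment"
				fmt.Println("removed stale state:", k.podUID, k.container)
			}
		}
	}
}
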
podUID="e373024a-8b55-49c9-a147-f4ff7b232fc6" containerName="registry-server" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.124347 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.135578 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5v4h"] Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.222439 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-utilities\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.222561 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-catalog-content\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.222591 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29l5c\" (UniqueName: \"kubernetes.io/projected/f6b3f57b-b4ae-417f-886a-05258157347a-kube-api-access-29l5c\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.325300 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-catalog-content\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.325368 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29l5c\" (UniqueName: \"kubernetes.io/projected/f6b3f57b-b4ae-417f-886a-05258157347a-kube-api-access-29l5c\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.325623 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-utilities\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.325739 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-catalog-content\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.326230 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-utilities\") pod \"redhat-marketplace-j5v4h\" (UID: 
\"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.353987 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29l5c\" (UniqueName: \"kubernetes.io/projected/f6b3f57b-b4ae-417f-886a-05258157347a-kube-api-access-29l5c\") pod \"redhat-marketplace-j5v4h\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:11 crc kubenswrapper[5024]: I1128 17:47:11.456697 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:12 crc kubenswrapper[5024]: I1128 17:47:12.046384 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5v4h"] Nov 28 17:47:12 crc kubenswrapper[5024]: I1128 17:47:12.768921 5024 generic.go:334] "Generic (PLEG): container finished" podID="f6b3f57b-b4ae-417f-886a-05258157347a" containerID="de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6" exitCode=0 Nov 28 17:47:12 crc kubenswrapper[5024]: I1128 17:47:12.769214 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5v4h" event={"ID":"f6b3f57b-b4ae-417f-886a-05258157347a","Type":"ContainerDied","Data":"de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6"} Nov 28 17:47:12 crc kubenswrapper[5024]: I1128 17:47:12.769239 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5v4h" event={"ID":"f6b3f57b-b4ae-417f-886a-05258157347a","Type":"ContainerStarted","Data":"34f8098de2f946dcc3ac35852dbe1ec3407982a4bb86de4b57f843084d73651f"} Nov 28 17:47:14 crc kubenswrapper[5024]: I1128 17:47:14.798314 5024 generic.go:334] "Generic (PLEG): container finished" podID="f6b3f57b-b4ae-417f-886a-05258157347a" containerID="16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90" exitCode=0 Nov 28 17:47:14 crc kubenswrapper[5024]: I1128 17:47:14.798374 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5v4h" event={"ID":"f6b3f57b-b4ae-417f-886a-05258157347a","Type":"ContainerDied","Data":"16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90"} Nov 28 17:47:15 crc kubenswrapper[5024]: I1128 17:47:15.812977 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5v4h" event={"ID":"f6b3f57b-b4ae-417f-886a-05258157347a","Type":"ContainerStarted","Data":"0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383"} Nov 28 17:47:15 crc kubenswrapper[5024]: I1128 17:47:15.845117 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5v4h" podStartSLOduration=2.367920856 podStartE2EDuration="4.845095331s" podCreationTimestamp="2025-11-28 17:47:11 +0000 UTC" firstStartedPulling="2025-11-28 17:47:12.770695895 +0000 UTC m=+2934.819616800" lastFinishedPulling="2025-11-28 17:47:15.24787037 +0000 UTC m=+2937.296791275" observedRunningTime="2025-11-28 17:47:15.831320462 +0000 UTC m=+2937.880241377" watchObservedRunningTime="2025-11-28 17:47:15.845095331 +0000 UTC m=+2937.894016226" Nov 28 17:47:21 crc kubenswrapper[5024]: I1128 17:47:21.456849 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:21 crc kubenswrapper[5024]: I1128 17:47:21.457533 5024 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:21 crc kubenswrapper[5024]: I1128 17:47:21.519016 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:21 crc kubenswrapper[5024]: I1128 17:47:21.939303 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:22 crc kubenswrapper[5024]: I1128 17:47:22.000880 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5v4h"] Nov 28 17:47:23 crc kubenswrapper[5024]: I1128 17:47:23.901106 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5v4h" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="registry-server" containerID="cri-o://0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383" gracePeriod=2 Nov 28 17:47:24 crc kubenswrapper[5024]: E1128 17:47:24.159712 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b3f57b_b4ae_417f_886a_05258157347a.slice/crio-0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b3f57b_b4ae_417f_886a_05258157347a.slice/crio-conmon-0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.457207 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.581769 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-utilities\") pod \"f6b3f57b-b4ae-417f-886a-05258157347a\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.581893 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-catalog-content\") pod \"f6b3f57b-b4ae-417f-886a-05258157347a\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.582565 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-utilities" (OuterVolumeSpecName: "utilities") pod "f6b3f57b-b4ae-417f-886a-05258157347a" (UID: "f6b3f57b-b4ae-417f-886a-05258157347a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.582608 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29l5c\" (UniqueName: \"kubernetes.io/projected/f6b3f57b-b4ae-417f-886a-05258157347a-kube-api-access-29l5c\") pod \"f6b3f57b-b4ae-417f-886a-05258157347a\" (UID: \"f6b3f57b-b4ae-417f-886a-05258157347a\") " Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.583625 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.589869 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b3f57b-b4ae-417f-886a-05258157347a-kube-api-access-29l5c" (OuterVolumeSpecName: "kube-api-access-29l5c") pod "f6b3f57b-b4ae-417f-886a-05258157347a" (UID: "f6b3f57b-b4ae-417f-886a-05258157347a"). InnerVolumeSpecName "kube-api-access-29l5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.602859 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6b3f57b-b4ae-417f-886a-05258157347a" (UID: "f6b3f57b-b4ae-417f-886a-05258157347a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.686181 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29l5c\" (UniqueName: \"kubernetes.io/projected/f6b3f57b-b4ae-417f-886a-05258157347a-kube-api-access-29l5c\") on node \"crc\" DevicePath \"\"" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.686222 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6b3f57b-b4ae-417f-886a-05258157347a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.913300 5024 generic.go:334] "Generic (PLEG): container finished" podID="f6b3f57b-b4ae-417f-886a-05258157347a" containerID="0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383" exitCode=0 Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.913350 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5v4h" event={"ID":"f6b3f57b-b4ae-417f-886a-05258157347a","Type":"ContainerDied","Data":"0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383"} Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.913379 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5v4h" event={"ID":"f6b3f57b-b4ae-417f-886a-05258157347a","Type":"ContainerDied","Data":"34f8098de2f946dcc3ac35852dbe1ec3407982a4bb86de4b57f843084d73651f"} Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.913401 5024 scope.go:117] "RemoveContainer" containerID="0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.913540 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5v4h" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.953220 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5v4h"] Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.963198 5024 scope.go:117] "RemoveContainer" containerID="16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90" Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.975512 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5v4h"] Nov 28 17:47:24 crc kubenswrapper[5024]: I1128 17:47:24.997671 5024 scope.go:117] "RemoveContainer" containerID="de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6" Nov 28 17:47:25 crc kubenswrapper[5024]: I1128 17:47:25.066664 5024 scope.go:117] "RemoveContainer" containerID="0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383" Nov 28 17:47:25 crc kubenswrapper[5024]: E1128 17:47:25.067210 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383\": container with ID starting with 0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383 not found: ID does not exist" containerID="0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383" Nov 28 17:47:25 crc kubenswrapper[5024]: I1128 17:47:25.067240 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383"} err="failed to get container status \"0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383\": rpc error: code = NotFound desc = could not find container \"0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383\": container with ID starting with 0ab79c5e22ac1889c3d39de75e8e0297a373b8e3ea9cc36eb8f5ce5972451383 not found: ID does not exist" Nov 28 17:47:25 crc kubenswrapper[5024]: I1128 17:47:25.067260 5024 scope.go:117] "RemoveContainer" containerID="16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90" Nov 28 17:47:25 crc kubenswrapper[5024]: E1128 17:47:25.067550 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90\": container with ID starting with 16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90 not found: ID does not exist" containerID="16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90" Nov 28 17:47:25 crc kubenswrapper[5024]: I1128 17:47:25.067573 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90"} err="failed to get container status \"16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90\": rpc error: code = NotFound desc = could not find container \"16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90\": container with ID starting with 16cb3da4f0753b048a74dd11b329dfb9d007f9b9f979de06875ff7d853e3ac90 not found: ID does not exist" Nov 28 17:47:25 crc kubenswrapper[5024]: I1128 17:47:25.067586 5024 scope.go:117] "RemoveContainer" containerID="de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6" Nov 28 17:47:25 crc kubenswrapper[5024]: E1128 17:47:25.067777 5024 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6\": container with ID starting with de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6 not found: ID does not exist" containerID="de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6" Nov 28 17:47:25 crc kubenswrapper[5024]: I1128 17:47:25.067792 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6"} err="failed to get container status \"de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6\": rpc error: code = NotFound desc = could not find container \"de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6\": container with ID starting with de6be29b6a242a5b79b88c06eac6685b4a61a28f3bc6cff5f3ab799e2ae2c4b6 not found: ID does not exist" Nov 28 17:47:26 crc kubenswrapper[5024]: I1128 17:47:26.520696 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" path="/var/lib/kubelet/pods/f6b3f57b-b4ae-417f-886a-05258157347a/volumes" Nov 28 17:47:37 crc kubenswrapper[5024]: I1128 17:47:37.565185 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:47:37 crc kubenswrapper[5024]: I1128 17:47:37.565767 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:47:48 crc kubenswrapper[5024]: E1128 17:47:48.262222 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98dfedf7_c96b_4029_8893_74f4abd9124b.slice/crio-76218fbc5707116373b9bdaec63e87e9c6fc3bd5706fedda479dc1695d2406de.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98dfedf7_c96b_4029_8893_74f4abd9124b.slice/crio-conmon-76218fbc5707116373b9bdaec63e87e9c6fc3bd5706fedda479dc1695d2406de.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:47:48 crc kubenswrapper[5024]: E1128 17:47:48.262312 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98dfedf7_c96b_4029_8893_74f4abd9124b.slice/crio-76218fbc5707116373b9bdaec63e87e9c6fc3bd5706fedda479dc1695d2406de.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98dfedf7_c96b_4029_8893_74f4abd9124b.slice/crio-conmon-76218fbc5707116373b9bdaec63e87e9c6fc3bd5706fedda479dc1695d2406de.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:47:49 crc kubenswrapper[5024]: I1128 17:47:49.187461 5024 generic.go:334] "Generic (PLEG): container finished" podID="98dfedf7-c96b-4029-8893-74f4abd9124b" containerID="76218fbc5707116373b9bdaec63e87e9c6fc3bd5706fedda479dc1695d2406de" exitCode=0 Nov 28 17:47:49 crc 
Nov 28 17:47:49 crc kubenswrapper[5024]: I1128 17:47:49.187569 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" event={"ID":"98dfedf7-c96b-4029-8893-74f4abd9124b","Type":"ContainerDied","Data":"76218fbc5707116373b9bdaec63e87e9c6fc3bd5706fedda479dc1695d2406de"}
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.693433 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w"
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.891972 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-0\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892069 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-extra-config-0\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892125 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-ssh-key\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892216 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-1\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892365 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5db4\" (UniqueName: \"kubernetes.io/projected/98dfedf7-c96b-4029-8893-74f4abd9124b-kube-api-access-f5db4\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892420 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-inventory\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892474 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-combined-ca-bundle\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892775 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-1\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.892810 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-0\") pod \"98dfedf7-c96b-4029-8893-74f4abd9124b\" (UID: \"98dfedf7-c96b-4029-8893-74f4abd9124b\") "
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.898495 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.900855 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dfedf7-c96b-4029-8893-74f4abd9124b-kube-api-access-f5db4" (OuterVolumeSpecName: "kube-api-access-f5db4") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "kube-api-access-f5db4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.925160 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.927833 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-inventory" (OuterVolumeSpecName: "inventory") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.930008 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.936898 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.946321 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.948848 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.949122 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "98dfedf7-c96b-4029-8893-74f4abd9124b" (UID: "98dfedf7-c96b-4029-8893-74f4abd9124b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996319 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5db4\" (UniqueName: \"kubernetes.io/projected/98dfedf7-c96b-4029-8893-74f4abd9124b-kube-api-access-f5db4\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996531 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996607 5024 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996666 5024 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996733 5024 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996812 5024 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996879 5024 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.996955 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:50 crc kubenswrapper[5024]: I1128 17:47:50.997040 5024 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/98dfedf7-c96b-4029-8893-74f4abd9124b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.207679 5024
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" event={"ID":"98dfedf7-c96b-4029-8893-74f4abd9124b","Type":"ContainerDied","Data":"be4acb2593fbbb9f9c41657c6d690b344a7183f8a5bceb33847e1cb53694cc04"} Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.207724 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be4acb2593fbbb9f9c41657c6d690b344a7183f8a5bceb33847e1cb53694cc04" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.207778 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pkt6w" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.450105 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c"] Nov 28 17:47:51 crc kubenswrapper[5024]: E1128 17:47:51.450831 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dfedf7-c96b-4029-8893-74f4abd9124b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.450853 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dfedf7-c96b-4029-8893-74f4abd9124b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 17:47:51 crc kubenswrapper[5024]: E1128 17:47:51.450872 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="extract-utilities" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.450883 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="extract-utilities" Nov 28 17:47:51 crc kubenswrapper[5024]: E1128 17:47:51.450910 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="registry-server" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.450917 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="registry-server" Nov 28 17:47:51 crc kubenswrapper[5024]: E1128 17:47:51.450947 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="extract-content" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.450957 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="extract-content" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.451247 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b3f57b-b4ae-417f-886a-05258157347a" containerName="registry-server" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.451296 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dfedf7-c96b-4029-8893-74f4abd9124b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.452492 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.458639 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.458685 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.458639 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.458867 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.458955 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.478146 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c"] Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.613092 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.613204 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58tj\" (UniqueName: \"kubernetes.io/projected/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-kube-api-access-f58tj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.613284 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.613396 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.613432 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 
17:47:51.615268 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.615427 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.718401 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.719204 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.719445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.719477 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.719560 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.719619 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f58tj\" (UniqueName: \"kubernetes.io/projected/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-kube-api-access-f58tj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.719683 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.723015 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.723077 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.723294 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.723791 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.723832 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.726900 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.744243 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f58tj\" (UniqueName: \"kubernetes.io/projected/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-kube-api-access-f58tj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c\" (UID: 
\"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:51 crc kubenswrapper[5024]: I1128 17:47:51.792176 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:47:52 crc kubenswrapper[5024]: I1128 17:47:52.356505 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c"] Nov 28 17:47:53 crc kubenswrapper[5024]: I1128 17:47:53.236003 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" event={"ID":"2b7c4fbd-b022-4a14-ae1a-18dfa307493f","Type":"ContainerStarted","Data":"40a9cbc620d73b8dba06a6e91504263150a0b2602280bbf97f85618cac2552bf"} Nov 28 17:47:54 crc kubenswrapper[5024]: I1128 17:47:54.304845 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" event={"ID":"2b7c4fbd-b022-4a14-ae1a-18dfa307493f","Type":"ContainerStarted","Data":"d5617a136002fdd0a91c5cef4cdbaaeb9df8c1d1abe9700bcc021981016f3281"} Nov 28 17:47:54 crc kubenswrapper[5024]: I1128 17:47:54.324911 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" podStartSLOduration=2.586550282 podStartE2EDuration="3.324894835s" podCreationTimestamp="2025-11-28 17:47:51 +0000 UTC" firstStartedPulling="2025-11-28 17:47:52.360189169 +0000 UTC m=+2974.409110074" lastFinishedPulling="2025-11-28 17:47:53.098533702 +0000 UTC m=+2975.147454627" observedRunningTime="2025-11-28 17:47:54.324685749 +0000 UTC m=+2976.373606654" watchObservedRunningTime="2025-11-28 17:47:54.324894835 +0000 UTC m=+2976.373815740" Nov 28 17:48:07 crc kubenswrapper[5024]: I1128 17:48:07.565486 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:48:07 crc kubenswrapper[5024]: I1128 17:48:07.566060 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:48:07 crc kubenswrapper[5024]: I1128 17:48:07.566120 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:48:07 crc kubenswrapper[5024]: I1128 17:48:07.567148 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:48:07 crc kubenswrapper[5024]: I1128 17:48:07.567199 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" 
containerID="cri-o://330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" gracePeriod=600 Nov 28 17:48:07 crc kubenswrapper[5024]: E1128 17:48:07.757886 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:48:08 crc kubenswrapper[5024]: I1128 17:48:08.474841 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" exitCode=0 Nov 28 17:48:08 crc kubenswrapper[5024]: I1128 17:48:08.475192 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d"} Nov 28 17:48:08 crc kubenswrapper[5024]: I1128 17:48:08.475237 5024 scope.go:117] "RemoveContainer" containerID="fe10743f8ef10d0cc481623f4350e242bd45d62a69009ce1616be5319adfb435" Nov 28 17:48:08 crc kubenswrapper[5024]: I1128 17:48:08.476845 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:48:08 crc kubenswrapper[5024]: E1128 17:48:08.477433 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:48:22 crc kubenswrapper[5024]: I1128 17:48:22.499196 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:48:22 crc kubenswrapper[5024]: E1128 17:48:22.500072 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:48:36 crc kubenswrapper[5024]: I1128 17:48:36.498590 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:48:36 crc kubenswrapper[5024]: E1128 17:48:36.499464 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:48:48 crc kubenswrapper[5024]: I1128 17:48:48.507913 5024 scope.go:117] "RemoveContainer" 
containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:48:48 crc kubenswrapper[5024]: E1128 17:48:48.508906 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:49:02 crc kubenswrapper[5024]: I1128 17:49:02.498393 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:49:02 crc kubenswrapper[5024]: E1128 17:49:02.499322 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:49:15 crc kubenswrapper[5024]: I1128 17:49:15.498115 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:49:15 crc kubenswrapper[5024]: E1128 17:49:15.498971 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:49:29 crc kubenswrapper[5024]: I1128 17:49:29.497882 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:49:29 crc kubenswrapper[5024]: E1128 17:49:29.499894 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:49:41 crc kubenswrapper[5024]: I1128 17:49:41.498057 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:49:41 crc kubenswrapper[5024]: E1128 17:49:41.498852 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:49:55 crc kubenswrapper[5024]: I1128 17:49:55.498598 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:49:55 crc kubenswrapper[5024]: E1128 17:49:55.499734 5024 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:50:05 crc kubenswrapper[5024]: I1128 17:50:05.860297 5024 generic.go:334] "Generic (PLEG): container finished" podID="2b7c4fbd-b022-4a14-ae1a-18dfa307493f" containerID="d5617a136002fdd0a91c5cef4cdbaaeb9df8c1d1abe9700bcc021981016f3281" exitCode=0 Nov 28 17:50:05 crc kubenswrapper[5024]: I1128 17:50:05.860404 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" event={"ID":"2b7c4fbd-b022-4a14-ae1a-18dfa307493f","Type":"ContainerDied","Data":"d5617a136002fdd0a91c5cef4cdbaaeb9df8c1d1abe9700bcc021981016f3281"} Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.325597 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.432439 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-2\") pod \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.432524 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-telemetry-combined-ca-bundle\") pod \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.432570 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f58tj\" (UniqueName: \"kubernetes.io/projected/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-kube-api-access-f58tj\") pod \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.432613 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-1\") pod \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.432801 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ssh-key\") pod \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.432876 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-inventory\") pod \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.433005 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-0\") pod \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\" (UID: \"2b7c4fbd-b022-4a14-ae1a-18dfa307493f\") " Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.439088 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2b7c4fbd-b022-4a14-ae1a-18dfa307493f" (UID: "2b7c4fbd-b022-4a14-ae1a-18dfa307493f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.439658 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-kube-api-access-f58tj" (OuterVolumeSpecName: "kube-api-access-f58tj") pod "2b7c4fbd-b022-4a14-ae1a-18dfa307493f" (UID: "2b7c4fbd-b022-4a14-ae1a-18dfa307493f"). InnerVolumeSpecName "kube-api-access-f58tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.468611 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2b7c4fbd-b022-4a14-ae1a-18dfa307493f" (UID: "2b7c4fbd-b022-4a14-ae1a-18dfa307493f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.474609 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2b7c4fbd-b022-4a14-ae1a-18dfa307493f" (UID: "2b7c4fbd-b022-4a14-ae1a-18dfa307493f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.477318 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2b7c4fbd-b022-4a14-ae1a-18dfa307493f" (UID: "2b7c4fbd-b022-4a14-ae1a-18dfa307493f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.479379 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2b7c4fbd-b022-4a14-ae1a-18dfa307493f" (UID: "2b7c4fbd-b022-4a14-ae1a-18dfa307493f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.481275 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-inventory" (OuterVolumeSpecName: "inventory") pod "2b7c4fbd-b022-4a14-ae1a-18dfa307493f" (UID: "2b7c4fbd-b022-4a14-ae1a-18dfa307493f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.536247 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.536601 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.536616 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.536629 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.536641 5024 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.536650 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f58tj\" (UniqueName: \"kubernetes.io/projected/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-kube-api-access-f58tj\") on node \"crc\" DevicePath \"\"" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.536662 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2b7c4fbd-b022-4a14-ae1a-18dfa307493f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.881348 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" event={"ID":"2b7c4fbd-b022-4a14-ae1a-18dfa307493f","Type":"ContainerDied","Data":"40a9cbc620d73b8dba06a6e91504263150a0b2602280bbf97f85618cac2552bf"} Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.881412 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a9cbc620d73b8dba06a6e91504263150a0b2602280bbf97f85618cac2552bf" Nov 28 17:50:07 crc kubenswrapper[5024]: I1128 17:50:07.881471 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.283743 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49"] Nov 28 17:50:08 crc kubenswrapper[5024]: E1128 17:50:08.284396 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7c4fbd-b022-4a14-ae1a-18dfa307493f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.284456 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7c4fbd-b022-4a14-ae1a-18dfa307493f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.284798 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7c4fbd-b022-4a14-ae1a-18dfa307493f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.286017 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.288215 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.288756 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.288784 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.290309 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.290312 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.301915 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49"] Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.457473 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.457579 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.457654 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ssh-key\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.457763 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.458048 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.458161 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lbqd\" (UniqueName: \"kubernetes.io/projected/68ce2acd-5232-4e99-8f05-0c0e50c1d060-kube-api-access-6lbqd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.458261 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.560760 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.560901 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lbqd\" (UniqueName: \"kubernetes.io/projected/68ce2acd-5232-4e99-8f05-0c0e50c1d060-kube-api-access-6lbqd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.560991 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.561137 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.561385 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.561478 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.562250 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.566812 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.566814 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.567155 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.567275 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.567455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.567629 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.672561 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lbqd\" (UniqueName: \"kubernetes.io/projected/68ce2acd-5232-4e99-8f05-0c0e50c1d060-kube-api-access-6lbqd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:08 crc kubenswrapper[5024]: I1128 17:50:08.908800 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:50:09 crc kubenswrapper[5024]: I1128 17:50:09.497594 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:50:09 crc kubenswrapper[5024]: E1128 17:50:09.499507 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:50:09 crc kubenswrapper[5024]: I1128 17:50:09.612678 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49"] Nov 28 17:50:09 crc kubenswrapper[5024]: W1128 17:50:09.622274 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68ce2acd_5232_4e99_8f05_0c0e50c1d060.slice/crio-360631d722ade695378373734c62fd78d84f2ce7284a4295939e9b96e76de747 WatchSource:0}: Error finding container 360631d722ade695378373734c62fd78d84f2ce7284a4295939e9b96e76de747: Status 404 returned error can't find the container with id 360631d722ade695378373734c62fd78d84f2ce7284a4295939e9b96e76de747 Nov 28 17:50:09 crc kubenswrapper[5024]: I1128 17:50:09.626911 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:50:09 crc kubenswrapper[5024]: I1128 17:50:09.903040 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" event={"ID":"68ce2acd-5232-4e99-8f05-0c0e50c1d060","Type":"ContainerStarted","Data":"360631d722ade695378373734c62fd78d84f2ce7284a4295939e9b96e76de747"} Nov 28 17:50:10 crc kubenswrapper[5024]: I1128 17:50:10.923090 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" event={"ID":"68ce2acd-5232-4e99-8f05-0c0e50c1d060","Type":"ContainerStarted","Data":"7fbe5fc4f947a37d6dade32a434e062f5490f6b1068055a727bc37dc05bcd8c3"} Nov 28 17:50:10 crc kubenswrapper[5024]: I1128 17:50:10.955408 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" podStartSLOduration=2.218239912 podStartE2EDuration="2.955363073s" podCreationTimestamp="2025-11-28 17:50:08 +0000 UTC" firstStartedPulling="2025-11-28 17:50:09.626691079 +0000 UTC m=+3111.675611984" lastFinishedPulling="2025-11-28 17:50:10.36381424 +0000 UTC m=+3112.412735145" observedRunningTime="2025-11-28 17:50:10.945421387 +0000 UTC m=+3112.994342292" watchObservedRunningTime="2025-11-28 17:50:10.955363073 +0000 UTC m=+3113.004283978" Nov 28 17:50:24 crc kubenswrapper[5024]: I1128 17:50:24.498337 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:50:24 crc kubenswrapper[5024]: E1128 17:50:24.499206 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:50:36 crc kubenswrapper[5024]: I1128 17:50:36.499199 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:50:36 crc kubenswrapper[5024]: E1128 17:50:36.500091 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:50:49 crc kubenswrapper[5024]: I1128 17:50:49.498206 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:50:49 crc kubenswrapper[5024]: E1128 17:50:49.498907 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:51:03 crc kubenswrapper[5024]: I1128 17:51:03.498326 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:51:03 crc kubenswrapper[5024]: E1128 17:51:03.499172 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:51:17 crc kubenswrapper[5024]: I1128 17:51:17.499156 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:51:17 crc kubenswrapper[5024]: E1128 17:51:17.500110 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:51:31 crc kubenswrapper[5024]: I1128 17:51:31.497788 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:51:31 crc kubenswrapper[5024]: E1128 17:51:31.499471 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" 
podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:51:44 crc kubenswrapper[5024]: I1128 17:51:44.498328 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:51:44 crc kubenswrapper[5024]: E1128 17:51:44.499155 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:51:55 crc kubenswrapper[5024]: I1128 17:51:55.163080 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" event={"ID":"68ce2acd-5232-4e99-8f05-0c0e50c1d060","Type":"ContainerDied","Data":"7fbe5fc4f947a37d6dade32a434e062f5490f6b1068055a727bc37dc05bcd8c3"} Nov 28 17:51:55 crc kubenswrapper[5024]: I1128 17:51:55.163010 5024 generic.go:334] "Generic (PLEG): container finished" podID="68ce2acd-5232-4e99-8f05-0c0e50c1d060" containerID="7fbe5fc4f947a37d6dade32a434e062f5490f6b1068055a727bc37dc05bcd8c3" exitCode=0 Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.706524 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.835619 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-inventory\") pod \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.836314 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-0\") pod \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.836370 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-1\") pod \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.836406 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ssh-key\") pod \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.836465 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-telemetry-power-monitoring-combined-ca-bundle\") pod \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.836523 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6lbqd\" (UniqueName: \"kubernetes.io/projected/68ce2acd-5232-4e99-8f05-0c0e50c1d060-kube-api-access-6lbqd\") pod \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.836559 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-2\") pod \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\" (UID: \"68ce2acd-5232-4e99-8f05-0c0e50c1d060\") " Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.841501 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ce2acd-5232-4e99-8f05-0c0e50c1d060-kube-api-access-6lbqd" (OuterVolumeSpecName: "kube-api-access-6lbqd") pod "68ce2acd-5232-4e99-8f05-0c0e50c1d060" (UID: "68ce2acd-5232-4e99-8f05-0c0e50c1d060"). InnerVolumeSpecName "kube-api-access-6lbqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.843385 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "68ce2acd-5232-4e99-8f05-0c0e50c1d060" (UID: "68ce2acd-5232-4e99-8f05-0c0e50c1d060"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.868249 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "68ce2acd-5232-4e99-8f05-0c0e50c1d060" (UID: "68ce2acd-5232-4e99-8f05-0c0e50c1d060"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.875904 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "68ce2acd-5232-4e99-8f05-0c0e50c1d060" (UID: "68ce2acd-5232-4e99-8f05-0c0e50c1d060"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.879072 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-inventory" (OuterVolumeSpecName: "inventory") pod "68ce2acd-5232-4e99-8f05-0c0e50c1d060" (UID: "68ce2acd-5232-4e99-8f05-0c0e50c1d060"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.880306 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "68ce2acd-5232-4e99-8f05-0c0e50c1d060" (UID: "68ce2acd-5232-4e99-8f05-0c0e50c1d060"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.883116 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "68ce2acd-5232-4e99-8f05-0c0e50c1d060" (UID: "68ce2acd-5232-4e99-8f05-0c0e50c1d060"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.940694 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.940727 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.940742 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.940794 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.940809 5024 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.940821 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lbqd\" (UniqueName: \"kubernetes.io/projected/68ce2acd-5232-4e99-8f05-0c0e50c1d060-kube-api-access-6lbqd\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:56 crc kubenswrapper[5024]: I1128 17:51:56.940832 5024 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/68ce2acd-5232-4e99-8f05-0c0e50c1d060-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.194310 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" event={"ID":"68ce2acd-5232-4e99-8f05-0c0e50c1d060","Type":"ContainerDied","Data":"360631d722ade695378373734c62fd78d84f2ce7284a4295939e9b96e76de747"} Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.194648 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="360631d722ade695378373734c62fd78d84f2ce7284a4295939e9b96e76de747" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.194467 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.455430 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78"] Nov 28 17:51:57 crc kubenswrapper[5024]: E1128 17:51:57.456052 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ce2acd-5232-4e99-8f05-0c0e50c1d060" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.456075 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ce2acd-5232-4e99-8f05-0c0e50c1d060" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.456452 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ce2acd-5232-4e99-8f05-0c0e50c1d060" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.457702 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.461082 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.461281 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wq7bc" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.461446 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.461571 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.461668 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.467072 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78"] Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.655856 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.656069 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.656112 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: 
\"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.656254 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.656440 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdc7\" (UniqueName: \"kubernetes.io/projected/a6d387e7-2e04-456a-973b-d3d13b988d4b-kube-api-access-shdc7\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.758271 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdc7\" (UniqueName: \"kubernetes.io/projected/a6d387e7-2e04-456a-973b-d3d13b988d4b-kube-api-access-shdc7\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.758366 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.758470 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.758505 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.758539 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.764361 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: 
\"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.768911 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.774988 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.775123 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.781290 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdc7\" (UniqueName: \"kubernetes.io/projected/a6d387e7-2e04-456a-973b-d3d13b988d4b-kube-api-access-shdc7\") pod \"logging-edpm-deployment-openstack-edpm-ipam-7jm78\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:57 crc kubenswrapper[5024]: I1128 17:51:57.818979 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:51:58 crc kubenswrapper[5024]: I1128 17:51:58.367938 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78"] Nov 28 17:51:58 crc kubenswrapper[5024]: I1128 17:51:58.509634 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:51:58 crc kubenswrapper[5024]: E1128 17:51:58.510107 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:51:59 crc kubenswrapper[5024]: I1128 17:51:59.220072 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" event={"ID":"a6d387e7-2e04-456a-973b-d3d13b988d4b","Type":"ContainerStarted","Data":"da7258021035a1f46776decab1b32fc57f6990dbd4441c2b23291ae4dec33c0f"} Nov 28 17:52:00 crc kubenswrapper[5024]: I1128 17:52:00.234531 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" event={"ID":"a6d387e7-2e04-456a-973b-d3d13b988d4b","Type":"ContainerStarted","Data":"18e352817e77fb19fba889f15e22fe80b1cbc31dcef3426aa07fe6412ddc9f32"} Nov 28 17:52:00 crc kubenswrapper[5024]: I1128 17:52:00.255748 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" podStartSLOduration=2.691993525 podStartE2EDuration="3.255699417s" podCreationTimestamp="2025-11-28 17:51:57 +0000 UTC" firstStartedPulling="2025-11-28 17:51:58.37512474 +0000 UTC m=+3220.424045635" lastFinishedPulling="2025-11-28 17:51:58.938830622 +0000 UTC m=+3220.987751527" observedRunningTime="2025-11-28 17:52:00.251671281 +0000 UTC m=+3222.300592216" watchObservedRunningTime="2025-11-28 17:52:00.255699417 +0000 UTC m=+3222.304620332" Nov 28 17:52:09 crc kubenswrapper[5024]: I1128 17:52:09.497911 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:52:09 crc kubenswrapper[5024]: E1128 17:52:09.498817 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:52:13 crc kubenswrapper[5024]: I1128 17:52:13.382488 5024 generic.go:334] "Generic (PLEG): container finished" podID="a6d387e7-2e04-456a-973b-d3d13b988d4b" containerID="18e352817e77fb19fba889f15e22fe80b1cbc31dcef3426aa07fe6412ddc9f32" exitCode=0 Nov 28 17:52:13 crc kubenswrapper[5024]: I1128 17:52:13.382583 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" event={"ID":"a6d387e7-2e04-456a-973b-d3d13b988d4b","Type":"ContainerDied","Data":"18e352817e77fb19fba889f15e22fe80b1cbc31dcef3426aa07fe6412ddc9f32"} Nov 28 17:52:14 crc 
kubenswrapper[5024]: I1128 17:52:14.822959 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.916917 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-1\") pod \"a6d387e7-2e04-456a-973b-d3d13b988d4b\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.917140 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-inventory\") pod \"a6d387e7-2e04-456a-973b-d3d13b988d4b\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.917198 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-0\") pod \"a6d387e7-2e04-456a-973b-d3d13b988d4b\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.917242 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-ssh-key\") pod \"a6d387e7-2e04-456a-973b-d3d13b988d4b\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.917366 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shdc7\" (UniqueName: \"kubernetes.io/projected/a6d387e7-2e04-456a-973b-d3d13b988d4b-kube-api-access-shdc7\") pod \"a6d387e7-2e04-456a-973b-d3d13b988d4b\" (UID: \"a6d387e7-2e04-456a-973b-d3d13b988d4b\") " Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.923102 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d387e7-2e04-456a-973b-d3d13b988d4b-kube-api-access-shdc7" (OuterVolumeSpecName: "kube-api-access-shdc7") pod "a6d387e7-2e04-456a-973b-d3d13b988d4b" (UID: "a6d387e7-2e04-456a-973b-d3d13b988d4b"). InnerVolumeSpecName "kube-api-access-shdc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.949283 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "a6d387e7-2e04-456a-973b-d3d13b988d4b" (UID: "a6d387e7-2e04-456a-973b-d3d13b988d4b"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.959143 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a6d387e7-2e04-456a-973b-d3d13b988d4b" (UID: "a6d387e7-2e04-456a-973b-d3d13b988d4b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.961481 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "a6d387e7-2e04-456a-973b-d3d13b988d4b" (UID: "a6d387e7-2e04-456a-973b-d3d13b988d4b"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:52:14 crc kubenswrapper[5024]: I1128 17:52:14.966966 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-inventory" (OuterVolumeSpecName: "inventory") pod "a6d387e7-2e04-456a-973b-d3d13b988d4b" (UID: "a6d387e7-2e04-456a-973b-d3d13b988d4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.021374 5024 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.021426 5024 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.021443 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.021454 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shdc7\" (UniqueName: \"kubernetes.io/projected/a6d387e7-2e04-456a-973b-d3d13b988d4b-kube-api-access-shdc7\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.021465 5024 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a6d387e7-2e04-456a-973b-d3d13b988d4b-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.407520 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" event={"ID":"a6d387e7-2e04-456a-973b-d3d13b988d4b","Type":"ContainerDied","Data":"da7258021035a1f46776decab1b32fc57f6990dbd4441c2b23291ae4dec33c0f"} Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.407596 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7258021035a1f46776decab1b32fc57f6990dbd4441c2b23291ae4dec33c0f" Nov 28 17:52:15 crc kubenswrapper[5024]: I1128 17:52:15.407633 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-7jm78" Nov 28 17:52:24 crc kubenswrapper[5024]: I1128 17:52:24.497485 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:52:24 crc kubenswrapper[5024]: E1128 17:52:24.498392 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:52:36 crc kubenswrapper[5024]: I1128 17:52:36.497747 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:52:36 crc kubenswrapper[5024]: E1128 17:52:36.498595 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:52:47 crc kubenswrapper[5024]: I1128 17:52:47.498584 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:52:47 crc kubenswrapper[5024]: E1128 17:52:47.499483 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:53:01 crc kubenswrapper[5024]: I1128 17:53:01.498151 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:53:01 crc kubenswrapper[5024]: E1128 17:53:01.498955 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:53:15 crc kubenswrapper[5024]: I1128 17:53:15.498690 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:53:16 crc kubenswrapper[5024]: I1128 17:53:16.193215 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"eecbdf1b23b7c67babb6c2fb4f15aa22e093798829c4c79e5fd5f5976bee3a4c"} Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.356872 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2m9xx"] Nov 28 17:54:46 crc kubenswrapper[5024]: E1128 17:54:46.357911 5024 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a6d387e7-2e04-456a-973b-d3d13b988d4b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.357927 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d387e7-2e04-456a-973b-d3d13b988d4b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.358225 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d387e7-2e04-456a-973b-d3d13b988d4b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.360681 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.403595 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2m9xx"] Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.505481 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-utilities\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.505868 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbpr\" (UniqueName: \"kubernetes.io/projected/a8d3fea6-4289-4549-af89-2788cee5aeb8-kube-api-access-4tbpr\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.505895 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-catalog-content\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.608648 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbpr\" (UniqueName: \"kubernetes.io/projected/a8d3fea6-4289-4549-af89-2788cee5aeb8-kube-api-access-4tbpr\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.608700 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-catalog-content\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.608789 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-utilities\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.609652 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-utilities\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.609729 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-catalog-content\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.633189 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbpr\" (UniqueName: \"kubernetes.io/projected/a8d3fea6-4289-4549-af89-2788cee5aeb8-kube-api-access-4tbpr\") pod \"certified-operators-2m9xx\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:46 crc kubenswrapper[5024]: I1128 17:54:46.698389 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:47 crc kubenswrapper[5024]: I1128 17:54:47.286879 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2m9xx"] Nov 28 17:54:48 crc kubenswrapper[5024]: I1128 17:54:48.209564 5024 generic.go:334] "Generic (PLEG): container finished" podID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerID="8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0" exitCode=0 Nov 28 17:54:48 crc kubenswrapper[5024]: I1128 17:54:48.209643 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2m9xx" event={"ID":"a8d3fea6-4289-4549-af89-2788cee5aeb8","Type":"ContainerDied","Data":"8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0"} Nov 28 17:54:48 crc kubenswrapper[5024]: I1128 17:54:48.209947 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2m9xx" event={"ID":"a8d3fea6-4289-4549-af89-2788cee5aeb8","Type":"ContainerStarted","Data":"caf9debeefd5e5165a34d95a799b64e98b9c919ea674fd21c4a9ca1e77e18cb7"} Nov 28 17:54:49 crc kubenswrapper[5024]: I1128 17:54:49.221623 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2m9xx" event={"ID":"a8d3fea6-4289-4549-af89-2788cee5aeb8","Type":"ContainerStarted","Data":"62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f"} Nov 28 17:54:50 crc kubenswrapper[5024]: I1128 17:54:50.232324 5024 generic.go:334] "Generic (PLEG): container finished" podID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerID="62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f" exitCode=0 Nov 28 17:54:50 crc kubenswrapper[5024]: I1128 17:54:50.232373 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2m9xx" event={"ID":"a8d3fea6-4289-4549-af89-2788cee5aeb8","Type":"ContainerDied","Data":"62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f"} Nov 28 17:54:51 crc kubenswrapper[5024]: I1128 17:54:51.246184 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2m9xx" event={"ID":"a8d3fea6-4289-4549-af89-2788cee5aeb8","Type":"ContainerStarted","Data":"a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003"} Nov 28 17:54:51 crc kubenswrapper[5024]: 
I1128 17:54:51.264676 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2m9xx" podStartSLOduration=2.786368107 podStartE2EDuration="5.264657659s" podCreationTimestamp="2025-11-28 17:54:46 +0000 UTC" firstStartedPulling="2025-11-28 17:54:48.21198837 +0000 UTC m=+3390.260909275" lastFinishedPulling="2025-11-28 17:54:50.690277922 +0000 UTC m=+3392.739198827" observedRunningTime="2025-11-28 17:54:51.263903188 +0000 UTC m=+3393.312824093" watchObservedRunningTime="2025-11-28 17:54:51.264657659 +0000 UTC m=+3393.313578564" Nov 28 17:54:56 crc kubenswrapper[5024]: I1128 17:54:56.699071 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:56 crc kubenswrapper[5024]: I1128 17:54:56.699609 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:56 crc kubenswrapper[5024]: I1128 17:54:56.750149 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:54:57 crc kubenswrapper[5024]: I1128 17:54:57.367674 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:55:00 crc kubenswrapper[5024]: I1128 17:55:00.343305 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2m9xx"] Nov 28 17:55:00 crc kubenswrapper[5024]: I1128 17:55:00.344214 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2m9xx" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="registry-server" containerID="cri-o://a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003" gracePeriod=2 Nov 28 17:55:00 crc kubenswrapper[5024]: I1128 17:55:00.845092 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.002688 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tbpr\" (UniqueName: \"kubernetes.io/projected/a8d3fea6-4289-4549-af89-2788cee5aeb8-kube-api-access-4tbpr\") pod \"a8d3fea6-4289-4549-af89-2788cee5aeb8\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.002824 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-utilities\") pod \"a8d3fea6-4289-4549-af89-2788cee5aeb8\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.002995 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-catalog-content\") pod \"a8d3fea6-4289-4549-af89-2788cee5aeb8\" (UID: \"a8d3fea6-4289-4549-af89-2788cee5aeb8\") " Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.003869 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-utilities" (OuterVolumeSpecName: "utilities") pod "a8d3fea6-4289-4549-af89-2788cee5aeb8" (UID: "a8d3fea6-4289-4549-af89-2788cee5aeb8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.027293 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d3fea6-4289-4549-af89-2788cee5aeb8-kube-api-access-4tbpr" (OuterVolumeSpecName: "kube-api-access-4tbpr") pod "a8d3fea6-4289-4549-af89-2788cee5aeb8" (UID: "a8d3fea6-4289-4549-af89-2788cee5aeb8"). InnerVolumeSpecName "kube-api-access-4tbpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.070121 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8d3fea6-4289-4549-af89-2788cee5aeb8" (UID: "a8d3fea6-4289-4549-af89-2788cee5aeb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.106974 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tbpr\" (UniqueName: \"kubernetes.io/projected/a8d3fea6-4289-4549-af89-2788cee5aeb8-kube-api-access-4tbpr\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.107005 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.107036 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d3fea6-4289-4549-af89-2788cee5aeb8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.376432 5024 generic.go:334] "Generic (PLEG): container finished" podID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerID="a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003" exitCode=0 Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.376778 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2m9xx" event={"ID":"a8d3fea6-4289-4549-af89-2788cee5aeb8","Type":"ContainerDied","Data":"a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003"} Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.376823 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2m9xx" event={"ID":"a8d3fea6-4289-4549-af89-2788cee5aeb8","Type":"ContainerDied","Data":"caf9debeefd5e5165a34d95a799b64e98b9c919ea674fd21c4a9ca1e77e18cb7"} Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.376863 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2m9xx" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.376871 5024 scope.go:117] "RemoveContainer" containerID="a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.437995 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2m9xx"] Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.442963 5024 scope.go:117] "RemoveContainer" containerID="62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.448872 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2m9xx"] Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.484385 5024 scope.go:117] "RemoveContainer" containerID="8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.530178 5024 scope.go:117] "RemoveContainer" containerID="a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003" Nov 28 17:55:01 crc kubenswrapper[5024]: E1128 17:55:01.533357 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003\": container with ID starting with a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003 not found: ID does not exist" containerID="a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.533403 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003"} err="failed to get container status \"a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003\": rpc error: code = NotFound desc = could not find container \"a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003\": container with ID starting with a05262193de2aa68968eadf936542655b2120efe9be10f058edab90189b8b003 not found: ID does not exist" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.533432 5024 scope.go:117] "RemoveContainer" containerID="62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f" Nov 28 17:55:01 crc kubenswrapper[5024]: E1128 17:55:01.533830 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f\": container with ID starting with 62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f not found: ID does not exist" containerID="62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.534112 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f"} err="failed to get container status \"62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f\": rpc error: code = NotFound desc = could not find container \"62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f\": container with ID starting with 62f1fb43a63a16e9c7f5e3a433236921393665db13459a4fdac32833eb41038f not found: ID does not exist" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.534238 5024 scope.go:117] "RemoveContainer" 
containerID="8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0" Nov 28 17:55:01 crc kubenswrapper[5024]: E1128 17:55:01.536390 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0\": container with ID starting with 8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0 not found: ID does not exist" containerID="8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0" Nov 28 17:55:01 crc kubenswrapper[5024]: I1128 17:55:01.536438 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0"} err="failed to get container status \"8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0\": rpc error: code = NotFound desc = could not find container \"8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0\": container with ID starting with 8428e7747c82f1eedecafdaee84dd8e3bd7a7bb86fe77c597022e3d31609f6b0 not found: ID does not exist" Nov 28 17:55:02 crc kubenswrapper[5024]: I1128 17:55:02.516378 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" path="/var/lib/kubelet/pods/a8d3fea6-4289-4549-af89-2788cee5aeb8/volumes" Nov 28 17:55:37 crc kubenswrapper[5024]: I1128 17:55:37.564396 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:55:37 crc kubenswrapper[5024]: I1128 17:55:37.564943 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:56:07 crc kubenswrapper[5024]: I1128 17:56:07.564614 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:56:07 crc kubenswrapper[5024]: I1128 17:56:07.565196 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:56:37 crc kubenswrapper[5024]: I1128 17:56:37.564992 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:56:37 crc kubenswrapper[5024]: I1128 17:56:37.565665 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:56:37 crc kubenswrapper[5024]: I1128 17:56:37.565734 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:56:37 crc kubenswrapper[5024]: I1128 17:56:37.568006 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eecbdf1b23b7c67babb6c2fb4f15aa22e093798829c4c79e5fd5f5976bee3a4c"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:56:37 crc kubenswrapper[5024]: I1128 17:56:37.568200 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://eecbdf1b23b7c67babb6c2fb4f15aa22e093798829c4c79e5fd5f5976bee3a4c" gracePeriod=600 Nov 28 17:56:38 crc kubenswrapper[5024]: I1128 17:56:38.657573 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="eecbdf1b23b7c67babb6c2fb4f15aa22e093798829c4c79e5fd5f5976bee3a4c" exitCode=0 Nov 28 17:56:38 crc kubenswrapper[5024]: I1128 17:56:38.657670 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"eecbdf1b23b7c67babb6c2fb4f15aa22e093798829c4c79e5fd5f5976bee3a4c"} Nov 28 17:56:38 crc kubenswrapper[5024]: I1128 17:56:38.658153 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24"} Nov 28 17:56:38 crc kubenswrapper[5024]: I1128 17:56:38.658196 5024 scope.go:117] "RemoveContainer" containerID="330d4ddc4a8375f4a169b70978e9d1482df01a80994e1bbacc5dcc4eb985db4d" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.013226 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9pblk"] Nov 28 17:56:55 crc kubenswrapper[5024]: E1128 17:56:55.014469 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="extract-utilities" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.014486 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="extract-utilities" Nov 28 17:56:55 crc kubenswrapper[5024]: E1128 17:56:55.014519 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="registry-server" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.014527 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="registry-server" Nov 28 17:56:55 crc kubenswrapper[5024]: E1128 17:56:55.014571 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="extract-content" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.014579 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="extract-content" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.014922 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d3fea6-4289-4549-af89-2788cee5aeb8" containerName="registry-server" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.017342 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.026839 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9pblk"] Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.151787 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvc64\" (UniqueName: \"kubernetes.io/projected/977081e3-a994-4991-8c50-fcb9d4d618d1-kube-api-access-xvc64\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.151890 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-utilities\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.151922 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-catalog-content\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.254051 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvc64\" (UniqueName: \"kubernetes.io/projected/977081e3-a994-4991-8c50-fcb9d4d618d1-kube-api-access-xvc64\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.254176 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-utilities\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.254211 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-catalog-content\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.254844 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-utilities\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.254907 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-catalog-content\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.274521 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvc64\" (UniqueName: \"kubernetes.io/projected/977081e3-a994-4991-8c50-fcb9d4d618d1-kube-api-access-xvc64\") pod \"redhat-operators-9pblk\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.357493 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:56:55 crc kubenswrapper[5024]: I1128 17:56:55.943818 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9pblk"] Nov 28 17:56:56 crc kubenswrapper[5024]: I1128 17:56:56.953203 5024 generic.go:334] "Generic (PLEG): container finished" podID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerID="6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e" exitCode=0 Nov 28 17:56:56 crc kubenswrapper[5024]: I1128 17:56:56.953698 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pblk" event={"ID":"977081e3-a994-4991-8c50-fcb9d4d618d1","Type":"ContainerDied","Data":"6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e"} Nov 28 17:56:56 crc kubenswrapper[5024]: I1128 17:56:56.953725 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pblk" event={"ID":"977081e3-a994-4991-8c50-fcb9d4d618d1","Type":"ContainerStarted","Data":"134cf414d712d097cdce49aa0e2fe5633d078ccd8cf548cf6a8c3c1e54c1cdd4"} Nov 28 17:56:56 crc kubenswrapper[5024]: I1128 17:56:56.960128 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:56:57 crc kubenswrapper[5024]: I1128 17:56:57.968451 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pblk" event={"ID":"977081e3-a994-4991-8c50-fcb9d4d618d1","Type":"ContainerStarted","Data":"6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672"} Nov 28 17:57:01 crc kubenswrapper[5024]: I1128 17:57:01.030928 5024 generic.go:334] "Generic (PLEG): container finished" podID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerID="6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672" exitCode=0 Nov 28 17:57:01 crc kubenswrapper[5024]: I1128 17:57:01.031507 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pblk" event={"ID":"977081e3-a994-4991-8c50-fcb9d4d618d1","Type":"ContainerDied","Data":"6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672"} Nov 28 17:57:02 crc kubenswrapper[5024]: I1128 17:57:02.043049 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pblk" event={"ID":"977081e3-a994-4991-8c50-fcb9d4d618d1","Type":"ContainerStarted","Data":"0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab"} Nov 28 17:57:02 crc kubenswrapper[5024]: I1128 17:57:02.071484 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9pblk" podStartSLOduration=3.437936736 podStartE2EDuration="8.071442162s" 
podCreationTimestamp="2025-11-28 17:56:54 +0000 UTC" firstStartedPulling="2025-11-28 17:56:56.959824822 +0000 UTC m=+3519.008745727" lastFinishedPulling="2025-11-28 17:57:01.593330248 +0000 UTC m=+3523.642251153" observedRunningTime="2025-11-28 17:57:02.062428917 +0000 UTC m=+3524.111349822" watchObservedRunningTime="2025-11-28 17:57:02.071442162 +0000 UTC m=+3524.120363067" Nov 28 17:57:05 crc kubenswrapper[5024]: I1128 17:57:05.358545 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:57:05 crc kubenswrapper[5024]: I1128 17:57:05.359232 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:57:06 crc kubenswrapper[5024]: I1128 17:57:06.423542 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9pblk" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="registry-server" probeResult="failure" output=< Nov 28 17:57:06 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 17:57:06 crc kubenswrapper[5024]: > Nov 28 17:57:15 crc kubenswrapper[5024]: I1128 17:57:15.411521 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:57:15 crc kubenswrapper[5024]: I1128 17:57:15.466805 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:57:15 crc kubenswrapper[5024]: I1128 17:57:15.666817 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9pblk"] Nov 28 17:57:17 crc kubenswrapper[5024]: I1128 17:57:17.209993 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9pblk" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="registry-server" containerID="cri-o://0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab" gracePeriod=2 Nov 28 17:57:17 crc kubenswrapper[5024]: I1128 17:57:17.777299 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:57:17 crc kubenswrapper[5024]: I1128 17:57:17.900701 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-catalog-content\") pod \"977081e3-a994-4991-8c50-fcb9d4d618d1\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " Nov 28 17:57:17 crc kubenswrapper[5024]: I1128 17:57:17.900858 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvc64\" (UniqueName: \"kubernetes.io/projected/977081e3-a994-4991-8c50-fcb9d4d618d1-kube-api-access-xvc64\") pod \"977081e3-a994-4991-8c50-fcb9d4d618d1\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " Nov 28 17:57:17 crc kubenswrapper[5024]: I1128 17:57:17.901122 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-utilities\") pod \"977081e3-a994-4991-8c50-fcb9d4d618d1\" (UID: \"977081e3-a994-4991-8c50-fcb9d4d618d1\") " Nov 28 17:57:17 crc kubenswrapper[5024]: I1128 17:57:17.902174 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-utilities" (OuterVolumeSpecName: "utilities") pod "977081e3-a994-4991-8c50-fcb9d4d618d1" (UID: "977081e3-a994-4991-8c50-fcb9d4d618d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:57:17 crc kubenswrapper[5024]: I1128 17:57:17.907300 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977081e3-a994-4991-8c50-fcb9d4d618d1-kube-api-access-xvc64" (OuterVolumeSpecName: "kube-api-access-xvc64") pod "977081e3-a994-4991-8c50-fcb9d4d618d1" (UID: "977081e3-a994-4991-8c50-fcb9d4d618d1"). InnerVolumeSpecName "kube-api-access-xvc64". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.003761 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.003789 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvc64\" (UniqueName: \"kubernetes.io/projected/977081e3-a994-4991-8c50-fcb9d4d618d1-kube-api-access-xvc64\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.006116 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "977081e3-a994-4991-8c50-fcb9d4d618d1" (UID: "977081e3-a994-4991-8c50-fcb9d4d618d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.106455 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977081e3-a994-4991-8c50-fcb9d4d618d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.236034 5024 generic.go:334] "Generic (PLEG): container finished" podID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerID="0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab" exitCode=0 Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.236120 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9pblk" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.236121 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pblk" event={"ID":"977081e3-a994-4991-8c50-fcb9d4d618d1","Type":"ContainerDied","Data":"0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab"} Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.237407 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pblk" event={"ID":"977081e3-a994-4991-8c50-fcb9d4d618d1","Type":"ContainerDied","Data":"134cf414d712d097cdce49aa0e2fe5633d078ccd8cf548cf6a8c3c1e54c1cdd4"} Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.237440 5024 scope.go:117] "RemoveContainer" containerID="0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.273286 5024 scope.go:117] "RemoveContainer" containerID="6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.282581 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9pblk"] Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.294623 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9pblk"] Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.306285 5024 scope.go:117] "RemoveContainer" containerID="6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.356410 5024 scope.go:117] "RemoveContainer" containerID="0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab" Nov 28 17:57:18 crc kubenswrapper[5024]: E1128 17:57:18.356961 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab\": container with ID starting with 0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab not found: ID does not exist" containerID="0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.357001 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab"} err="failed to get container status \"0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab\": rpc error: code = NotFound desc = could not find container \"0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab\": container with ID starting with 0cabd9e8bac179d8effcddd99a3687600a210687ad149ab230d4ccfc5df2d0ab not found: ID does not exist" Nov 28 17:57:18 crc 
kubenswrapper[5024]: I1128 17:57:18.357173 5024 scope.go:117] "RemoveContainer" containerID="6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672" Nov 28 17:57:18 crc kubenswrapper[5024]: E1128 17:57:18.357965 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672\": container with ID starting with 6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672 not found: ID does not exist" containerID="6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.358011 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672"} err="failed to get container status \"6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672\": rpc error: code = NotFound desc = could not find container \"6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672\": container with ID starting with 6d94e37e04d7b365e274f0ce517bf604f749ea090548b5a4d46e025382e67672 not found: ID does not exist" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.358048 5024 scope.go:117] "RemoveContainer" containerID="6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e" Nov 28 17:57:18 crc kubenswrapper[5024]: E1128 17:57:18.358388 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e\": container with ID starting with 6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e not found: ID does not exist" containerID="6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.358421 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e"} err="failed to get container status \"6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e\": rpc error: code = NotFound desc = could not find container \"6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e\": container with ID starting with 6bc97e907eb6e634b18196b4e2a177d02b7e221aec44d83ac736e37eea1f674e not found: ID does not exist" Nov 28 17:57:18 crc kubenswrapper[5024]: I1128 17:57:18.515133 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" path="/var/lib/kubelet/pods/977081e3-a994-4991-8c50-fcb9d4d618d1/volumes" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.930531 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-szzsn"] Nov 28 17:57:39 crc kubenswrapper[5024]: E1128 17:57:39.931863 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="extract-content" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.931987 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="extract-content" Nov 28 17:57:39 crc kubenswrapper[5024]: E1128 17:57:39.932035 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="extract-utilities" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.932044 5024 
state_mem.go:107] "Deleted CPUSet assignment" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="extract-utilities" Nov 28 17:57:39 crc kubenswrapper[5024]: E1128 17:57:39.932086 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="registry-server" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.932096 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="registry-server" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.932373 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="977081e3-a994-4991-8c50-fcb9d4d618d1" containerName="registry-server" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.934426 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.946409 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-szzsn"] Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.980099 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-utilities\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.980169 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-catalog-content\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:39 crc kubenswrapper[5024]: I1128 17:57:39.980405 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp87c\" (UniqueName: \"kubernetes.io/projected/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-kube-api-access-dp87c\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc kubenswrapper[5024]: I1128 17:57:40.082348 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp87c\" (UniqueName: \"kubernetes.io/projected/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-kube-api-access-dp87c\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc kubenswrapper[5024]: I1128 17:57:40.082700 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-utilities\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc kubenswrapper[5024]: I1128 17:57:40.082794 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-catalog-content\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc 
kubenswrapper[5024]: I1128 17:57:40.083678 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-catalog-content\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc kubenswrapper[5024]: I1128 17:57:40.083681 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-utilities\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc kubenswrapper[5024]: I1128 17:57:40.104120 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp87c\" (UniqueName: \"kubernetes.io/projected/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-kube-api-access-dp87c\") pod \"redhat-marketplace-szzsn\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc kubenswrapper[5024]: I1128 17:57:40.304682 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:40 crc kubenswrapper[5024]: I1128 17:57:40.807701 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-szzsn"] Nov 28 17:57:41 crc kubenswrapper[5024]: I1128 17:57:41.533337 5024 generic.go:334] "Generic (PLEG): container finished" podID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerID="5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b" exitCode=0 Nov 28 17:57:41 crc kubenswrapper[5024]: I1128 17:57:41.533579 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szzsn" event={"ID":"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6","Type":"ContainerDied","Data":"5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b"} Nov 28 17:57:41 crc kubenswrapper[5024]: I1128 17:57:41.533944 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szzsn" event={"ID":"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6","Type":"ContainerStarted","Data":"f77f340c2d57a7769c92d5e9970a7807ee53447550d97846d50c176802efdb16"} Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.339460 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tdfsm"] Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.342697 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.355102 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tdfsm"] Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.442850 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bn5\" (UniqueName: \"kubernetes.io/projected/2c978fd1-2f49-402e-97a6-f79162bb8e91-kube-api-access-78bn5\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.442948 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-catalog-content\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.443051 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-utilities\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.545471 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bn5\" (UniqueName: \"kubernetes.io/projected/2c978fd1-2f49-402e-97a6-f79162bb8e91-kube-api-access-78bn5\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.545595 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-catalog-content\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.545706 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-utilities\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.546964 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-utilities\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.547106 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-catalog-content\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.548206 5024 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szzsn" event={"ID":"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6","Type":"ContainerStarted","Data":"ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa"} Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.577822 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bn5\" (UniqueName: \"kubernetes.io/projected/2c978fd1-2f49-402e-97a6-f79162bb8e91-kube-api-access-78bn5\") pod \"community-operators-tdfsm\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:42 crc kubenswrapper[5024]: I1128 17:57:42.663417 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:43 crc kubenswrapper[5024]: I1128 17:57:43.212696 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tdfsm"] Nov 28 17:57:43 crc kubenswrapper[5024]: W1128 17:57:43.216285 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c978fd1_2f49_402e_97a6_f79162bb8e91.slice/crio-c2d1f63de4791e139522496080e13c2ddc973fc0b87f041f11e3a799f3eaaa8d WatchSource:0}: Error finding container c2d1f63de4791e139522496080e13c2ddc973fc0b87f041f11e3a799f3eaaa8d: Status 404 returned error can't find the container with id c2d1f63de4791e139522496080e13c2ddc973fc0b87f041f11e3a799f3eaaa8d Nov 28 17:57:43 crc kubenswrapper[5024]: I1128 17:57:43.564127 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerStarted","Data":"134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174"} Nov 28 17:57:43 crc kubenswrapper[5024]: I1128 17:57:43.564181 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerStarted","Data":"c2d1f63de4791e139522496080e13c2ddc973fc0b87f041f11e3a799f3eaaa8d"} Nov 28 17:57:43 crc kubenswrapper[5024]: I1128 17:57:43.567648 5024 generic.go:334] "Generic (PLEG): container finished" podID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerID="ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa" exitCode=0 Nov 28 17:57:43 crc kubenswrapper[5024]: I1128 17:57:43.567709 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szzsn" event={"ID":"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6","Type":"ContainerDied","Data":"ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa"} Nov 28 17:57:44 crc kubenswrapper[5024]: I1128 17:57:44.583996 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szzsn" event={"ID":"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6","Type":"ContainerStarted","Data":"58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246"} Nov 28 17:57:44 crc kubenswrapper[5024]: I1128 17:57:44.588716 5024 generic.go:334] "Generic (PLEG): container finished" podID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerID="134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174" exitCode=0 Nov 28 17:57:44 crc kubenswrapper[5024]: I1128 17:57:44.588795 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" 
event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerDied","Data":"134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174"} Nov 28 17:57:44 crc kubenswrapper[5024]: I1128 17:57:44.607726 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-szzsn" podStartSLOduration=3.112170481 podStartE2EDuration="5.607702262s" podCreationTimestamp="2025-11-28 17:57:39 +0000 UTC" firstStartedPulling="2025-11-28 17:57:41.536118727 +0000 UTC m=+3563.585039642" lastFinishedPulling="2025-11-28 17:57:44.031650508 +0000 UTC m=+3566.080571423" observedRunningTime="2025-11-28 17:57:44.604505621 +0000 UTC m=+3566.653426556" watchObservedRunningTime="2025-11-28 17:57:44.607702262 +0000 UTC m=+3566.656623167" Nov 28 17:57:45 crc kubenswrapper[5024]: I1128 17:57:45.600858 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerStarted","Data":"748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387"} Nov 28 17:57:46 crc kubenswrapper[5024]: I1128 17:57:46.618755 5024 generic.go:334] "Generic (PLEG): container finished" podID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerID="748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387" exitCode=0 Nov 28 17:57:46 crc kubenswrapper[5024]: I1128 17:57:46.619167 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerDied","Data":"748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387"} Nov 28 17:57:47 crc kubenswrapper[5024]: I1128 17:57:47.632890 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerStarted","Data":"c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058"} Nov 28 17:57:47 crc kubenswrapper[5024]: I1128 17:57:47.663264 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tdfsm" podStartSLOduration=2.9781234 podStartE2EDuration="5.663246004s" podCreationTimestamp="2025-11-28 17:57:42 +0000 UTC" firstStartedPulling="2025-11-28 17:57:44.592861542 +0000 UTC m=+3566.641782447" lastFinishedPulling="2025-11-28 17:57:47.277984136 +0000 UTC m=+3569.326905051" observedRunningTime="2025-11-28 17:57:47.653812007 +0000 UTC m=+3569.702732902" watchObservedRunningTime="2025-11-28 17:57:47.663246004 +0000 UTC m=+3569.712166909" Nov 28 17:57:50 crc kubenswrapper[5024]: I1128 17:57:50.304892 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:50 crc kubenswrapper[5024]: I1128 17:57:50.305690 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:50 crc kubenswrapper[5024]: I1128 17:57:50.364798 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:50 crc kubenswrapper[5024]: I1128 17:57:50.714942 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:51 crc kubenswrapper[5024]: I1128 17:57:51.722061 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-szzsn"] Nov 28 17:57:52 crc kubenswrapper[5024]: I1128 17:57:52.663731 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:52 crc kubenswrapper[5024]: I1128 17:57:52.664087 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:52 crc kubenswrapper[5024]: I1128 17:57:52.712494 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:52 crc kubenswrapper[5024]: I1128 17:57:52.762196 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:53 crc kubenswrapper[5024]: I1128 17:57:53.695653 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-szzsn" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="registry-server" containerID="cri-o://58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246" gracePeriod=2 Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.155619 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tdfsm"] Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.513215 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.565801 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-utilities\") pod \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.565972 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-catalog-content\") pod \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.566225 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp87c\" (UniqueName: \"kubernetes.io/projected/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-kube-api-access-dp87c\") pod \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\" (UID: \"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6\") " Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.566563 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-utilities" (OuterVolumeSpecName: "utilities") pod "cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" (UID: "cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.567174 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.571657 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-kube-api-access-dp87c" (OuterVolumeSpecName: "kube-api-access-dp87c") pod "cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" (UID: "cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6"). InnerVolumeSpecName "kube-api-access-dp87c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.580887 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" (UID: "cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.671245 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp87c\" (UniqueName: \"kubernetes.io/projected/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-kube-api-access-dp87c\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.671699 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.708084 5024 generic.go:334] "Generic (PLEG): container finished" podID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerID="58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246" exitCode=0 Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.708162 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-szzsn" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.708177 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szzsn" event={"ID":"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6","Type":"ContainerDied","Data":"58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246"} Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.708551 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-szzsn" event={"ID":"cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6","Type":"ContainerDied","Data":"f77f340c2d57a7769c92d5e9970a7807ee53447550d97846d50c176802efdb16"} Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.708607 5024 scope.go:117] "RemoveContainer" containerID="58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.708720 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tdfsm" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="registry-server" containerID="cri-o://c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058" gracePeriod=2 Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.755512 5024 scope.go:117] "RemoveContainer" containerID="ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.762300 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-szzsn"] Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.773492 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-szzsn"] Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.834718 5024 scope.go:117] "RemoveContainer" containerID="5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.945208 5024 scope.go:117] "RemoveContainer" containerID="58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246" Nov 28 17:57:54 crc kubenswrapper[5024]: E1128 17:57:54.945718 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246\": container with ID starting with 58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246 not found: ID does not exist" containerID="58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.945757 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246"} err="failed to get container status \"58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246\": rpc error: code = NotFound desc = could not find container \"58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246\": container with ID starting with 58651abe9a2ca23d5031da180566335ead07862505ed92038615971295fcb246 not found: ID does not exist" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.945785 5024 scope.go:117] "RemoveContainer" containerID="ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa" Nov 28 17:57:54 crc kubenswrapper[5024]: E1128 17:57:54.946081 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa\": container with ID starting with ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa not found: ID does not exist" containerID="ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.946112 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa"} err="failed to get container status \"ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa\": rpc error: code = NotFound desc = could not find container \"ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa\": container with ID starting with ae62d021701aa0bd805e71c07eee2e320c2d1bc468e1ede1973a5428426c38aa not found: ID does not exist" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.946132 5024 scope.go:117] "RemoveContainer" containerID="5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b" Nov 28 17:57:54 crc kubenswrapper[5024]: E1128 17:57:54.946817 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b\": container with ID starting with 5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b not found: ID does not exist" containerID="5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b" Nov 28 17:57:54 crc kubenswrapper[5024]: I1128 17:57:54.946846 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b"} err="failed to get container status \"5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b\": rpc error: code = NotFound desc = could not find container \"5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b\": container with ID starting with 5bee8f0676d4337015c06314cc7f2ba5e7f91fcc4055cc91ebc18593e771e34b not found: ID does not exist" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.285714 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.395290 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-utilities\") pod \"2c978fd1-2f49-402e-97a6-f79162bb8e91\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.395703 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-catalog-content\") pod \"2c978fd1-2f49-402e-97a6-f79162bb8e91\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.395793 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78bn5\" (UniqueName: \"kubernetes.io/projected/2c978fd1-2f49-402e-97a6-f79162bb8e91-kube-api-access-78bn5\") pod \"2c978fd1-2f49-402e-97a6-f79162bb8e91\" (UID: \"2c978fd1-2f49-402e-97a6-f79162bb8e91\") " Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.396170 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-utilities" (OuterVolumeSpecName: "utilities") pod "2c978fd1-2f49-402e-97a6-f79162bb8e91" (UID: "2c978fd1-2f49-402e-97a6-f79162bb8e91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.396872 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.400511 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c978fd1-2f49-402e-97a6-f79162bb8e91-kube-api-access-78bn5" (OuterVolumeSpecName: "kube-api-access-78bn5") pod "2c978fd1-2f49-402e-97a6-f79162bb8e91" (UID: "2c978fd1-2f49-402e-97a6-f79162bb8e91"). InnerVolumeSpecName "kube-api-access-78bn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.441460 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c978fd1-2f49-402e-97a6-f79162bb8e91" (UID: "2c978fd1-2f49-402e-97a6-f79162bb8e91"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.498689 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78bn5\" (UniqueName: \"kubernetes.io/projected/2c978fd1-2f49-402e-97a6-f79162bb8e91-kube-api-access-78bn5\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.498723 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c978fd1-2f49-402e-97a6-f79162bb8e91-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.720379 5024 generic.go:334] "Generic (PLEG): container finished" podID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerID="c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058" exitCode=0 Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.720449 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tdfsm" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.720452 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerDied","Data":"c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058"} Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.720504 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfsm" event={"ID":"2c978fd1-2f49-402e-97a6-f79162bb8e91","Type":"ContainerDied","Data":"c2d1f63de4791e139522496080e13c2ddc973fc0b87f041f11e3a799f3eaaa8d"} Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.720525 5024 scope.go:117] "RemoveContainer" containerID="c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.752386 5024 scope.go:117] "RemoveContainer" containerID="748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.754575 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tdfsm"] Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.765466 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tdfsm"] Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.777780 5024 scope.go:117] "RemoveContainer" containerID="134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.798116 5024 scope.go:117] "RemoveContainer" containerID="c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058" Nov 28 17:57:55 crc kubenswrapper[5024]: E1128 17:57:55.799266 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058\": container with ID starting with c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058 not found: ID does not exist" containerID="c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.799313 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058"} err="failed to get container status 
\"c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058\": rpc error: code = NotFound desc = could not find container \"c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058\": container with ID starting with c313436fda92aff5efd4209b19c0d42a505930be7b646a15cf7a91ce61f9d058 not found: ID does not exist" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.799352 5024 scope.go:117] "RemoveContainer" containerID="748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387" Nov 28 17:57:55 crc kubenswrapper[5024]: E1128 17:57:55.799922 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387\": container with ID starting with 748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387 not found: ID does not exist" containerID="748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.799951 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387"} err="failed to get container status \"748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387\": rpc error: code = NotFound desc = could not find container \"748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387\": container with ID starting with 748ce089104531940b32c31fb7a4f641d77c0f61eb344b9423ff3196094c9387 not found: ID does not exist" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.799990 5024 scope.go:117] "RemoveContainer" containerID="134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174" Nov 28 17:57:55 crc kubenswrapper[5024]: E1128 17:57:55.800533 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174\": container with ID starting with 134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174 not found: ID does not exist" containerID="134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174" Nov 28 17:57:55 crc kubenswrapper[5024]: I1128 17:57:55.800595 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174"} err="failed to get container status \"134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174\": rpc error: code = NotFound desc = could not find container \"134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174\": container with ID starting with 134683f5282470525ecbd124731fc4cf36ae5e83bee06d112f57e39f3774f174 not found: ID does not exist" Nov 28 17:57:56 crc kubenswrapper[5024]: I1128 17:57:56.513495 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" path="/var/lib/kubelet/pods/2c978fd1-2f49-402e-97a6-f79162bb8e91/volumes" Nov 28 17:57:56 crc kubenswrapper[5024]: I1128 17:57:56.515291 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" path="/var/lib/kubelet/pods/cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6/volumes" Nov 28 17:58:17 crc kubenswrapper[5024]: E1128 17:58:17.183148 5024 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.141:49594->38.129.56.141:40169: write tcp 38.129.56.141:49594->38.129.56.141:40169: write: broken 
pipe Nov 28 17:58:37 crc kubenswrapper[5024]: I1128 17:58:37.565157 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:58:37 crc kubenswrapper[5024]: I1128 17:58:37.565806 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:59:07 crc kubenswrapper[5024]: I1128 17:59:07.564760 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:59:07 crc kubenswrapper[5024]: I1128 17:59:07.565452 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.564644 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.565252 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.565316 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.566408 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.566461 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" gracePeriod=600 Nov 28 17:59:37 crc kubenswrapper[5024]: E1128 17:59:37.700682 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.920928 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" exitCode=0 Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.920979 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24"} Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.921036 5024 scope.go:117] "RemoveContainer" containerID="eecbdf1b23b7c67babb6c2fb4f15aa22e093798829c4c79e5fd5f5976bee3a4c" Nov 28 17:59:37 crc kubenswrapper[5024]: I1128 17:59:37.922009 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 17:59:37 crc kubenswrapper[5024]: E1128 17:59:37.922436 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 17:59:49 crc kubenswrapper[5024]: I1128 17:59:49.498061 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 17:59:49 crc kubenswrapper[5024]: E1128 17:59:49.499117 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.179384 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d"] Nov 28 18:00:00 crc kubenswrapper[5024]: E1128 18:00:00.180684 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.180704 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[5024]: E1128 18:00:00.180728 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.180737 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[5024]: E1128 18:00:00.180782 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="registry-server" Nov 28 18:00:00 crc 
kubenswrapper[5024]: I1128 18:00:00.180791 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[5024]: E1128 18:00:00.180814 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.180821 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[5024]: E1128 18:00:00.180838 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.180845 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[5024]: E1128 18:00:00.180865 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.180877 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.181222 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1b0c6e-9cd0-4e20-a08a-2b7b87031eb6" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.181261 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c978fd1-2f49-402e-97a6-f79162bb8e91" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.182562 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.185695 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.193457 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.206703 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d"] Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.209972 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d20d89c0-adc6-4d04-976c-454c89ec777e-secret-volume\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.210170 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbrp6\" (UniqueName: \"kubernetes.io/projected/d20d89c0-adc6-4d04-976c-454c89ec777e-kube-api-access-cbrp6\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.210214 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d20d89c0-adc6-4d04-976c-454c89ec777e-config-volume\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.312670 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d20d89c0-adc6-4d04-976c-454c89ec777e-secret-volume\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.312751 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbrp6\" (UniqueName: \"kubernetes.io/projected/d20d89c0-adc6-4d04-976c-454c89ec777e-kube-api-access-cbrp6\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.312775 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d20d89c0-adc6-4d04-976c-454c89ec777e-config-volume\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.313916 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d20d89c0-adc6-4d04-976c-454c89ec777e-config-volume\") pod 
\"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.321690 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d20d89c0-adc6-4d04-976c-454c89ec777e-secret-volume\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.332521 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbrp6\" (UniqueName: \"kubernetes.io/projected/d20d89c0-adc6-4d04-976c-454c89ec777e-kube-api-access-cbrp6\") pod \"collect-profiles-29405880-w8v7d\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:00 crc kubenswrapper[5024]: I1128 18:00:00.517156 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:01 crc kubenswrapper[5024]: I1128 18:00:01.041250 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d"] Nov 28 18:00:01 crc kubenswrapper[5024]: I1128 18:00:01.177670 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" event={"ID":"d20d89c0-adc6-4d04-976c-454c89ec777e","Type":"ContainerStarted","Data":"3199229c7663b831dbb2c2ceaad939d672de2ceb7be0a12d4cabb85cb2e6d6cc"} Nov 28 18:00:02 crc kubenswrapper[5024]: I1128 18:00:02.191998 5024 generic.go:334] "Generic (PLEG): container finished" podID="d20d89c0-adc6-4d04-976c-454c89ec777e" containerID="04de4ce793ef9dae183496ccc1572dda3d4d67d4709d19b832b649d62a0669bd" exitCode=0 Nov 28 18:00:02 crc kubenswrapper[5024]: I1128 18:00:02.192158 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" event={"ID":"d20d89c0-adc6-4d04-976c-454c89ec777e","Type":"ContainerDied","Data":"04de4ce793ef9dae183496ccc1572dda3d4d67d4709d19b832b649d62a0669bd"} Nov 28 18:00:02 crc kubenswrapper[5024]: I1128 18:00:02.498604 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:00:02 crc kubenswrapper[5024]: E1128 18:00:02.499843 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.642521 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.714489 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d20d89c0-adc6-4d04-976c-454c89ec777e-secret-volume\") pod \"d20d89c0-adc6-4d04-976c-454c89ec777e\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.714656 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbrp6\" (UniqueName: \"kubernetes.io/projected/d20d89c0-adc6-4d04-976c-454c89ec777e-kube-api-access-cbrp6\") pod \"d20d89c0-adc6-4d04-976c-454c89ec777e\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.716857 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d20d89c0-adc6-4d04-976c-454c89ec777e-config-volume\") pod \"d20d89c0-adc6-4d04-976c-454c89ec777e\" (UID: \"d20d89c0-adc6-4d04-976c-454c89ec777e\") " Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.718161 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20d89c0-adc6-4d04-976c-454c89ec777e-config-volume" (OuterVolumeSpecName: "config-volume") pod "d20d89c0-adc6-4d04-976c-454c89ec777e" (UID: "d20d89c0-adc6-4d04-976c-454c89ec777e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.719076 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d20d89c0-adc6-4d04-976c-454c89ec777e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.721239 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20d89c0-adc6-4d04-976c-454c89ec777e-kube-api-access-cbrp6" (OuterVolumeSpecName: "kube-api-access-cbrp6") pod "d20d89c0-adc6-4d04-976c-454c89ec777e" (UID: "d20d89c0-adc6-4d04-976c-454c89ec777e"). InnerVolumeSpecName "kube-api-access-cbrp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.737102 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20d89c0-adc6-4d04-976c-454c89ec777e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d20d89c0-adc6-4d04-976c-454c89ec777e" (UID: "d20d89c0-adc6-4d04-976c-454c89ec777e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.823618 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbrp6\" (UniqueName: \"kubernetes.io/projected/d20d89c0-adc6-4d04-976c-454c89ec777e-kube-api-access-cbrp6\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:03 crc kubenswrapper[5024]: I1128 18:00:03.823663 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d20d89c0-adc6-4d04-976c-454c89ec777e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:04 crc kubenswrapper[5024]: I1128 18:00:04.227167 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" event={"ID":"d20d89c0-adc6-4d04-976c-454c89ec777e","Type":"ContainerDied","Data":"3199229c7663b831dbb2c2ceaad939d672de2ceb7be0a12d4cabb85cb2e6d6cc"} Nov 28 18:00:04 crc kubenswrapper[5024]: I1128 18:00:04.227489 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3199229c7663b831dbb2c2ceaad939d672de2ceb7be0a12d4cabb85cb2e6d6cc" Nov 28 18:00:04 crc kubenswrapper[5024]: I1128 18:00:04.227222 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d" Nov 28 18:00:04 crc kubenswrapper[5024]: I1128 18:00:04.724757 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8"] Nov 28 18:00:04 crc kubenswrapper[5024]: I1128 18:00:04.736372 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-hz5m8"] Nov 28 18:00:06 crc kubenswrapper[5024]: I1128 18:00:06.516001 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b6b2cf-174e-47d6-8532-b7cff728a185" path="/var/lib/kubelet/pods/78b6b2cf-174e-47d6-8532-b7cff728a185/volumes" Nov 28 18:00:14 crc kubenswrapper[5024]: I1128 18:00:14.498168 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:00:14 crc kubenswrapper[5024]: E1128 18:00:14.499117 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:00:25 crc kubenswrapper[5024]: I1128 18:00:25.498884 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:00:25 crc kubenswrapper[5024]: E1128 18:00:25.499913 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:00:39 crc kubenswrapper[5024]: I1128 18:00:39.498545 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:00:39 
crc kubenswrapper[5024]: E1128 18:00:39.499333 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:00:52 crc kubenswrapper[5024]: I1128 18:00:52.498441 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:00:52 crc kubenswrapper[5024]: E1128 18:00:52.499287 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:00:57 crc kubenswrapper[5024]: I1128 18:00:57.807720 5024 scope.go:117] "RemoveContainer" containerID="d1eb27dbbb8813f7b95a9e70f4a44d3007d749f0a1dd58d00ac1b20c4dcce34a" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.155448 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29405881-9766r"] Nov 28 18:01:00 crc kubenswrapper[5024]: E1128 18:01:00.156386 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20d89c0-adc6-4d04-976c-454c89ec777e" containerName="collect-profiles" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.156401 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20d89c0-adc6-4d04-976c-454c89ec777e" containerName="collect-profiles" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.156632 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="d20d89c0-adc6-4d04-976c-454c89ec777e" containerName="collect-profiles" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.157519 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.168745 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405881-9766r"] Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.254093 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-fernet-keys\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.254173 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-config-data\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.254257 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpqls\" (UniqueName: \"kubernetes.io/projected/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-kube-api-access-fpqls\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.254399 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-combined-ca-bundle\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.356952 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-fernet-keys\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.357067 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-config-data\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.357118 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpqls\" (UniqueName: \"kubernetes.io/projected/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-kube-api-access-fpqls\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.357251 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-combined-ca-bundle\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.363303 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-combined-ca-bundle\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.363653 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-fernet-keys\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.364536 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-config-data\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.372895 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpqls\" (UniqueName: \"kubernetes.io/projected/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-kube-api-access-fpqls\") pod \"keystone-cron-29405881-9766r\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:00 crc kubenswrapper[5024]: I1128 18:01:00.476591 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:01 crc kubenswrapper[5024]: I1128 18:01:01.006086 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405881-9766r"] Nov 28 18:01:01 crc kubenswrapper[5024]: I1128 18:01:01.894520 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-9766r" event={"ID":"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1","Type":"ContainerStarted","Data":"5147f8dd4103ae23a31aca9f587b5f12acac27c3e7ad442217b5c9392c41bc96"} Nov 28 18:01:01 crc kubenswrapper[5024]: I1128 18:01:01.894812 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-9766r" event={"ID":"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1","Type":"ContainerStarted","Data":"d89aaa15fc486e9a766b54ddf0420b7918269999d3445024d49cb99c36768005"} Nov 28 18:01:01 crc kubenswrapper[5024]: I1128 18:01:01.920818 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29405881-9766r" podStartSLOduration=1.920795752 podStartE2EDuration="1.920795752s" podCreationTimestamp="2025-11-28 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 18:01:01.910594621 +0000 UTC m=+3763.959515526" watchObservedRunningTime="2025-11-28 18:01:01.920795752 +0000 UTC m=+3763.969716657" Nov 28 18:01:03 crc kubenswrapper[5024]: I1128 18:01:03.915419 5024 generic.go:334] "Generic (PLEG): container finished" podID="42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" containerID="5147f8dd4103ae23a31aca9f587b5f12acac27c3e7ad442217b5c9392c41bc96" exitCode=0 Nov 28 18:01:03 crc kubenswrapper[5024]: I1128 18:01:03.915497 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-9766r" event={"ID":"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1","Type":"ContainerDied","Data":"5147f8dd4103ae23a31aca9f587b5f12acac27c3e7ad442217b5c9392c41bc96"} Nov 28 18:01:04 crc kubenswrapper[5024]: 
I1128 18:01:04.498669 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:01:04 crc kubenswrapper[5024]: E1128 18:01:04.499479 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.384103 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.492276 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-combined-ca-bundle\") pod \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.492463 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-fernet-keys\") pod \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.492539 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-config-data\") pod \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.492622 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpqls\" (UniqueName: \"kubernetes.io/projected/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-kube-api-access-fpqls\") pod \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\" (UID: \"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1\") " Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.499661 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" (UID: "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.501310 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-kube-api-access-fpqls" (OuterVolumeSpecName: "kube-api-access-fpqls") pod "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" (UID: "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1"). InnerVolumeSpecName "kube-api-access-fpqls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.534793 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" (UID: "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.655616 5024 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.655642 5024 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.655652 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpqls\" (UniqueName: \"kubernetes.io/projected/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-kube-api-access-fpqls\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.658180 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-config-data" (OuterVolumeSpecName: "config-data") pod "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" (UID: "42a7d1a5-e99a-47e1-aeb7-20974f1a50a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.758856 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42a7d1a5-e99a-47e1-aeb7-20974f1a50a1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.943725 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-9766r" event={"ID":"42a7d1a5-e99a-47e1-aeb7-20974f1a50a1","Type":"ContainerDied","Data":"d89aaa15fc486e9a766b54ddf0420b7918269999d3445024d49cb99c36768005"} Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.944343 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d89aaa15fc486e9a766b54ddf0420b7918269999d3445024d49cb99c36768005" Nov 28 18:01:05 crc kubenswrapper[5024]: I1128 18:01:05.943816 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405881-9766r" Nov 28 18:01:19 crc kubenswrapper[5024]: I1128 18:01:19.499164 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:01:19 crc kubenswrapper[5024]: E1128 18:01:19.500441 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:01:30 crc kubenswrapper[5024]: I1128 18:01:30.498448 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:01:30 crc kubenswrapper[5024]: E1128 18:01:30.499275 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:01:41 crc kubenswrapper[5024]: I1128 18:01:41.498614 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:01:41 crc kubenswrapper[5024]: E1128 18:01:41.499871 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:01:52 crc kubenswrapper[5024]: I1128 18:01:52.498956 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:01:52 crc kubenswrapper[5024]: E1128 18:01:52.500199 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:02:06 crc kubenswrapper[5024]: I1128 18:02:06.499134 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:02:06 crc kubenswrapper[5024]: E1128 18:02:06.499876 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:02:17 crc kubenswrapper[5024]: I1128 18:02:17.498154 5024 scope.go:117] "RemoveContainer" 
containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:02:17 crc kubenswrapper[5024]: E1128 18:02:17.499091 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:02:32 crc kubenswrapper[5024]: I1128 18:02:32.498965 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:02:32 crc kubenswrapper[5024]: E1128 18:02:32.499796 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:02:45 crc kubenswrapper[5024]: I1128 18:02:45.498216 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:02:45 crc kubenswrapper[5024]: E1128 18:02:45.499105 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:02:57 crc kubenswrapper[5024]: I1128 18:02:57.498064 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:02:57 crc kubenswrapper[5024]: E1128 18:02:57.498794 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:03:10 crc kubenswrapper[5024]: I1128 18:03:10.499488 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:03:10 crc kubenswrapper[5024]: E1128 18:03:10.500229 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:03:24 crc kubenswrapper[5024]: I1128 18:03:24.498913 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:03:24 crc kubenswrapper[5024]: E1128 18:03:24.499761 5024 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:03:38 crc kubenswrapper[5024]: I1128 18:03:38.506336 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:03:38 crc kubenswrapper[5024]: E1128 18:03:38.508385 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:03:52 crc kubenswrapper[5024]: I1128 18:03:52.498514 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:03:52 crc kubenswrapper[5024]: E1128 18:03:52.499708 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:04:07 crc kubenswrapper[5024]: I1128 18:04:07.498750 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:04:07 crc kubenswrapper[5024]: E1128 18:04:07.499568 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:04:18 crc kubenswrapper[5024]: I1128 18:04:18.513557 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:04:18 crc kubenswrapper[5024]: E1128 18:04:18.514316 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:04:31 crc kubenswrapper[5024]: I1128 18:04:31.497849 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:04:31 crc kubenswrapper[5024]: E1128 18:04:31.498781 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:04:43 crc kubenswrapper[5024]: I1128 18:04:43.498450 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:04:44 crc kubenswrapper[5024]: I1128 18:04:44.588349 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"5badbbac41c54237c3ecda45bd378943c00ae3e6a05816f76e05c62d8cb043e1"} Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.581821 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t7kj8"] Nov 28 18:05:48 crc kubenswrapper[5024]: E1128 18:05:48.583220 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" containerName="keystone-cron" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.583245 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" containerName="keystone-cron" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.583593 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a7d1a5-e99a-47e1-aeb7-20974f1a50a1" containerName="keystone-cron" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.585692 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.606920 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7kj8"] Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.716113 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-catalog-content\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.716236 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cft72\" (UniqueName: \"kubernetes.io/projected/b55a54c7-edcc-4262-88e3-0908be6832b8-kube-api-access-cft72\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.716260 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-utilities\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.818680 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-catalog-content\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: 
I1128 18:05:48.818809 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cft72\" (UniqueName: \"kubernetes.io/projected/b55a54c7-edcc-4262-88e3-0908be6832b8-kube-api-access-cft72\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.818841 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-utilities\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.820062 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-catalog-content\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.820191 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-utilities\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.841755 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cft72\" (UniqueName: \"kubernetes.io/projected/b55a54c7-edcc-4262-88e3-0908be6832b8-kube-api-access-cft72\") pod \"certified-operators-t7kj8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:48 crc kubenswrapper[5024]: I1128 18:05:48.915913 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:49 crc kubenswrapper[5024]: I1128 18:05:49.591946 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7kj8"] Nov 28 18:05:50 crc kubenswrapper[5024]: I1128 18:05:50.330054 5024 generic.go:334] "Generic (PLEG): container finished" podID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerID="4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232" exitCode=0 Nov 28 18:05:50 crc kubenswrapper[5024]: I1128 18:05:50.330104 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7kj8" event={"ID":"b55a54c7-edcc-4262-88e3-0908be6832b8","Type":"ContainerDied","Data":"4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232"} Nov 28 18:05:50 crc kubenswrapper[5024]: I1128 18:05:50.330398 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7kj8" event={"ID":"b55a54c7-edcc-4262-88e3-0908be6832b8","Type":"ContainerStarted","Data":"255ccb8861a665bc16c6307fca0128cded0dc3c043984fe9ecc2e2881ed82eb3"} Nov 28 18:05:50 crc kubenswrapper[5024]: I1128 18:05:50.332976 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 18:05:52 crc kubenswrapper[5024]: I1128 18:05:52.351497 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7kj8" event={"ID":"b55a54c7-edcc-4262-88e3-0908be6832b8","Type":"ContainerStarted","Data":"9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf"} Nov 28 18:05:54 crc kubenswrapper[5024]: I1128 18:05:54.384671 5024 generic.go:334] "Generic (PLEG): container finished" podID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerID="9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf" exitCode=0 Nov 28 18:05:54 crc kubenswrapper[5024]: I1128 18:05:54.384759 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7kj8" event={"ID":"b55a54c7-edcc-4262-88e3-0908be6832b8","Type":"ContainerDied","Data":"9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf"} Nov 28 18:05:55 crc kubenswrapper[5024]: I1128 18:05:55.398751 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7kj8" event={"ID":"b55a54c7-edcc-4262-88e3-0908be6832b8","Type":"ContainerStarted","Data":"07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b"} Nov 28 18:05:55 crc kubenswrapper[5024]: I1128 18:05:55.432656 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t7kj8" podStartSLOduration=2.85514797 podStartE2EDuration="7.432620602s" podCreationTimestamp="2025-11-28 18:05:48 +0000 UTC" firstStartedPulling="2025-11-28 18:05:50.332721811 +0000 UTC m=+4052.381642716" lastFinishedPulling="2025-11-28 18:05:54.910194443 +0000 UTC m=+4056.959115348" observedRunningTime="2025-11-28 18:05:55.42096248 +0000 UTC m=+4057.469883385" watchObservedRunningTime="2025-11-28 18:05:55.432620602 +0000 UTC m=+4057.481541507" Nov 28 18:05:58 crc kubenswrapper[5024]: I1128 18:05:58.916364 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:05:58 crc kubenswrapper[5024]: I1128 18:05:58.916955 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t7kj8" 
Nov 28 18:05:58 crc kubenswrapper[5024]: I1128 18:05:58.974326 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:06:07 crc kubenswrapper[5024]: I1128 18:06:07.784495 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="4ff0447c-7f25-4d0a-a58b-d5fff6673749" containerName="ovn-northd" probeResult="failure" output="command timed out" Nov 28 18:06:09 crc kubenswrapper[5024]: I1128 18:06:09.447656 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:06:09 crc kubenswrapper[5024]: I1128 18:06:09.526962 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7kj8"] Nov 28 18:06:09 crc kubenswrapper[5024]: I1128 18:06:09.565827 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t7kj8" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="registry-server" containerID="cri-o://07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b" gracePeriod=2 Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.106395 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.200399 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-utilities\") pod \"b55a54c7-edcc-4262-88e3-0908be6832b8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.200467 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cft72\" (UniqueName: \"kubernetes.io/projected/b55a54c7-edcc-4262-88e3-0908be6832b8-kube-api-access-cft72\") pod \"b55a54c7-edcc-4262-88e3-0908be6832b8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.201276 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-utilities" (OuterVolumeSpecName: "utilities") pod "b55a54c7-edcc-4262-88e3-0908be6832b8" (UID: "b55a54c7-edcc-4262-88e3-0908be6832b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.218419 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55a54c7-edcc-4262-88e3-0908be6832b8-kube-api-access-cft72" (OuterVolumeSpecName: "kube-api-access-cft72") pod "b55a54c7-edcc-4262-88e3-0908be6832b8" (UID: "b55a54c7-edcc-4262-88e3-0908be6832b8"). InnerVolumeSpecName "kube-api-access-cft72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.302361 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-catalog-content\") pod \"b55a54c7-edcc-4262-88e3-0908be6832b8\" (UID: \"b55a54c7-edcc-4262-88e3-0908be6832b8\") " Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.303366 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.303396 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cft72\" (UniqueName: \"kubernetes.io/projected/b55a54c7-edcc-4262-88e3-0908be6832b8-kube-api-access-cft72\") on node \"crc\" DevicePath \"\"" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.347296 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b55a54c7-edcc-4262-88e3-0908be6832b8" (UID: "b55a54c7-edcc-4262-88e3-0908be6832b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.405514 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55a54c7-edcc-4262-88e3-0908be6832b8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.579610 5024 generic.go:334] "Generic (PLEG): container finished" podID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerID="07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b" exitCode=0 Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.579703 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7kj8" event={"ID":"b55a54c7-edcc-4262-88e3-0908be6832b8","Type":"ContainerDied","Data":"07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b"} Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.580107 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7kj8" event={"ID":"b55a54c7-edcc-4262-88e3-0908be6832b8","Type":"ContainerDied","Data":"255ccb8861a665bc16c6307fca0128cded0dc3c043984fe9ecc2e2881ed82eb3"} Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.579872 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t7kj8" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.580137 5024 scope.go:117] "RemoveContainer" containerID="07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.606234 5024 scope.go:117] "RemoveContainer" containerID="9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.608986 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7kj8"] Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.622620 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t7kj8"] Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.628637 5024 scope.go:117] "RemoveContainer" containerID="4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.684083 5024 scope.go:117] "RemoveContainer" containerID="07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b" Nov 28 18:06:10 crc kubenswrapper[5024]: E1128 18:06:10.684621 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b\": container with ID starting with 07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b not found: ID does not exist" containerID="07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.684682 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b"} err="failed to get container status \"07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b\": rpc error: code = NotFound desc = could not find container \"07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b\": container with ID starting with 07e078f0ca95f67b033d3cf31beafaa9adf290fbb9a664503bcdbdff9c77f35b not found: ID does not exist" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.684711 5024 scope.go:117] "RemoveContainer" containerID="9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf" Nov 28 18:06:10 crc kubenswrapper[5024]: E1128 18:06:10.685997 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf\": container with ID starting with 9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf not found: ID does not exist" containerID="9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.686051 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf"} err="failed to get container status \"9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf\": rpc error: code = NotFound desc = could not find container \"9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf\": container with ID starting with 9ac17b49003e2124a74725338dbcd954d1d36a43ef9df3aaf47e8950432b14cf not found: ID does not exist" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.686079 5024 scope.go:117] "RemoveContainer" 
containerID="4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232" Nov 28 18:06:10 crc kubenswrapper[5024]: E1128 18:06:10.686460 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232\": container with ID starting with 4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232 not found: ID does not exist" containerID="4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232" Nov 28 18:06:10 crc kubenswrapper[5024]: I1128 18:06:10.686488 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232"} err="failed to get container status \"4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232\": rpc error: code = NotFound desc = could not find container \"4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232\": container with ID starting with 4fc76a110c020ecadad9626258a47cbc533bcc85364194eed8e8333376284232 not found: ID does not exist" Nov 28 18:06:12 crc kubenswrapper[5024]: I1128 18:06:12.533006 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" path="/var/lib/kubelet/pods/b55a54c7-edcc-4262-88e3-0908be6832b8/volumes" Nov 28 18:07:07 crc kubenswrapper[5024]: I1128 18:07:07.565577 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:07:07 crc kubenswrapper[5024]: I1128 18:07:07.566156 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.585175 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7jcs8"] Nov 28 18:07:19 crc kubenswrapper[5024]: E1128 18:07:19.586396 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="extract-utilities" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.586418 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="extract-utilities" Nov 28 18:07:19 crc kubenswrapper[5024]: E1128 18:07:19.586442 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="registry-server" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.586449 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="registry-server" Nov 28 18:07:19 crc kubenswrapper[5024]: E1128 18:07:19.586485 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="extract-content" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.586492 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="extract-content" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 
Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.586701 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55a54c7-edcc-4262-88e3-0908be6832b8" containerName="registry-server" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.588598 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.624070 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jcs8"] Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.738369 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8hzj\" (UniqueName: \"kubernetes.io/projected/db92d617-2086-418f-8bd8-a76387986e2a-kube-api-access-n8hzj\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.738445 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-catalog-content\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.738485 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-utilities\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.840653 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8hzj\" (UniqueName: \"kubernetes.io/projected/db92d617-2086-418f-8bd8-a76387986e2a-kube-api-access-n8hzj\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.840750 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-catalog-content\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.840803 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-utilities\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.841353 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-catalog-content\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.841379 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-utilities\") pod 
\"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.861353 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8hzj\" (UniqueName: \"kubernetes.io/projected/db92d617-2086-418f-8bd8-a76387986e2a-kube-api-access-n8hzj\") pod \"redhat-operators-7jcs8\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:19 crc kubenswrapper[5024]: I1128 18:07:19.912506 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:20 crc kubenswrapper[5024]: I1128 18:07:20.411598 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jcs8"] Nov 28 18:07:20 crc kubenswrapper[5024]: I1128 18:07:20.438746 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jcs8" event={"ID":"db92d617-2086-418f-8bd8-a76387986e2a","Type":"ContainerStarted","Data":"4d3594c83ab1cf27707f54886d067004155e5100cd92bb812f8dbf18c203f8c4"} Nov 28 18:07:21 crc kubenswrapper[5024]: I1128 18:07:21.455859 5024 generic.go:334] "Generic (PLEG): container finished" podID="db92d617-2086-418f-8bd8-a76387986e2a" containerID="ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a" exitCode=0 Nov 28 18:07:21 crc kubenswrapper[5024]: I1128 18:07:21.455935 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jcs8" event={"ID":"db92d617-2086-418f-8bd8-a76387986e2a","Type":"ContainerDied","Data":"ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a"} Nov 28 18:07:23 crc kubenswrapper[5024]: I1128 18:07:23.483812 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jcs8" event={"ID":"db92d617-2086-418f-8bd8-a76387986e2a","Type":"ContainerStarted","Data":"2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05"} Nov 28 18:07:26 crc kubenswrapper[5024]: I1128 18:07:26.533859 5024 generic.go:334] "Generic (PLEG): container finished" podID="db92d617-2086-418f-8bd8-a76387986e2a" containerID="2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05" exitCode=0 Nov 28 18:07:26 crc kubenswrapper[5024]: I1128 18:07:26.533959 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jcs8" event={"ID":"db92d617-2086-418f-8bd8-a76387986e2a","Type":"ContainerDied","Data":"2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05"} Nov 28 18:07:27 crc kubenswrapper[5024]: I1128 18:07:27.549947 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jcs8" event={"ID":"db92d617-2086-418f-8bd8-a76387986e2a","Type":"ContainerStarted","Data":"48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa"} Nov 28 18:07:27 crc kubenswrapper[5024]: I1128 18:07:27.576328 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7jcs8" podStartSLOduration=3.040343813 podStartE2EDuration="8.576308603s" podCreationTimestamp="2025-11-28 18:07:19 +0000 UTC" firstStartedPulling="2025-11-28 18:07:21.461227588 +0000 UTC m=+4143.510148523" lastFinishedPulling="2025-11-28 18:07:26.997192418 +0000 UTC m=+4149.046113313" observedRunningTime="2025-11-28 18:07:27.569268512 +0000 UTC m=+4149.618189427" 
Nov 28 18:07:29 crc kubenswrapper[5024]: I1128 18:07:29.913865 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:29 crc kubenswrapper[5024]: I1128 18:07:29.915306 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:30 crc kubenswrapper[5024]: I1128 18:07:30.991843 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7jcs8" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="registry-server" probeResult="failure" output=< Nov 28 18:07:30 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:07:30 crc kubenswrapper[5024]: > Nov 28 18:07:37 crc kubenswrapper[5024]: I1128 18:07:37.564364 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:07:37 crc kubenswrapper[5024]: I1128 18:07:37.564848 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:07:39 crc kubenswrapper[5024]: I1128 18:07:39.966894 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:40 crc kubenswrapper[5024]: I1128 18:07:40.015763 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:41 crc kubenswrapper[5024]: I1128 18:07:41.763533 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jcs8"] Nov 28 18:07:41 crc kubenswrapper[5024]: I1128 18:07:41.764006 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7jcs8" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="registry-server" containerID="cri-o://48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa" gracePeriod=2
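The Startup probe failure above ("timeout: failed to connect service \":50051\" within 1s") is the registry-server health check timing out while the catalog is still loading; once it passes, the startup probe flips to "started" and readiness to "ready". Assuming, for illustration, that a bare TCP connect stands in for the real gRPC health probe against :50051:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe succeeds if a TCP connection to addr can be opened within timeout.
// The real check is a gRPC health probe; this is a simplified stand-in.
func probe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
	}
	conn.Close()
	return nil
}

func main() {
	// Until registry-server listens on :50051 this fails, and the kubelet
	// records probeResult="failure" as in the log.
	if err := probe(":50051", time.Second); err != nil {
		fmt.Println(err)
	}
}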
Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.385400 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.405260 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-catalog-content\") pod \"db92d617-2086-418f-8bd8-a76387986e2a\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.405311 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-utilities\") pod \"db92d617-2086-418f-8bd8-a76387986e2a\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.405340 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8hzj\" (UniqueName: \"kubernetes.io/projected/db92d617-2086-418f-8bd8-a76387986e2a-kube-api-access-n8hzj\") pod \"db92d617-2086-418f-8bd8-a76387986e2a\" (UID: \"db92d617-2086-418f-8bd8-a76387986e2a\") " Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.407382 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-utilities" (OuterVolumeSpecName: "utilities") pod "db92d617-2086-418f-8bd8-a76387986e2a" (UID: "db92d617-2086-418f-8bd8-a76387986e2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.412231 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db92d617-2086-418f-8bd8-a76387986e2a-kube-api-access-n8hzj" (OuterVolumeSpecName: "kube-api-access-n8hzj") pod "db92d617-2086-418f-8bd8-a76387986e2a" (UID: "db92d617-2086-418f-8bd8-a76387986e2a"). InnerVolumeSpecName "kube-api-access-n8hzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.510357 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.510638 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8hzj\" (UniqueName: \"kubernetes.io/projected/db92d617-2086-418f-8bd8-a76387986e2a-kube-api-access-n8hzj\") on node \"crc\" DevicePath \"\"" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.523579 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db92d617-2086-418f-8bd8-a76387986e2a" (UID: "db92d617-2086-418f-8bd8-a76387986e2a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.613176 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db92d617-2086-418f-8bd8-a76387986e2a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.724959 5024 generic.go:334] "Generic (PLEG): container finished" podID="db92d617-2086-418f-8bd8-a76387986e2a" containerID="48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa" exitCode=0 Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.725031 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jcs8" event={"ID":"db92d617-2086-418f-8bd8-a76387986e2a","Type":"ContainerDied","Data":"48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa"} Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.725165 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jcs8" event={"ID":"db92d617-2086-418f-8bd8-a76387986e2a","Type":"ContainerDied","Data":"4d3594c83ab1cf27707f54886d067004155e5100cd92bb812f8dbf18c203f8c4"} Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.725194 5024 scope.go:117] "RemoveContainer" containerID="48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.725448 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jcs8" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.767479 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jcs8"] Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.772684 5024 scope.go:117] "RemoveContainer" containerID="2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.777622 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7jcs8"] Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.808092 5024 scope.go:117] "RemoveContainer" containerID="ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.873493 5024 scope.go:117] "RemoveContainer" containerID="48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa" Nov 28 18:07:42 crc kubenswrapper[5024]: E1128 18:07:42.874104 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa\": container with ID starting with 48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa not found: ID does not exist" containerID="48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.874193 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa"} err="failed to get container status \"48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa\": rpc error: code = NotFound desc = could not find container \"48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa\": container with ID starting with 48ffae4ff11986dd2745d7ee212148129ccab5210924ef95414e1aa4c401deaa not found: ID does not exist" Nov 28 18:07:42 crc 
kubenswrapper[5024]: I1128 18:07:42.874250 5024 scope.go:117] "RemoveContainer" containerID="2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05" Nov 28 18:07:42 crc kubenswrapper[5024]: E1128 18:07:42.874862 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05\": container with ID starting with 2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05 not found: ID does not exist" containerID="2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.874950 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05"} err="failed to get container status \"2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05\": rpc error: code = NotFound desc = could not find container \"2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05\": container with ID starting with 2b18624260ea25ddeae03093513508aa7590a8b6f647de35b223167ea4cd9b05 not found: ID does not exist" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.874988 5024 scope.go:117] "RemoveContainer" containerID="ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a" Nov 28 18:07:42 crc kubenswrapper[5024]: E1128 18:07:42.876286 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a\": container with ID starting with ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a not found: ID does not exist" containerID="ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a" Nov 28 18:07:42 crc kubenswrapper[5024]: I1128 18:07:42.876342 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a"} err="failed to get container status \"ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a\": rpc error: code = NotFound desc = could not find container \"ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a\": container with ID starting with ac3fa9558a3405ccd4dfa56a5804311c9fd41b3d1f1ddc6a8a3eacbe1bb3168a not found: ID does not exist" Nov 28 18:07:44 crc kubenswrapper[5024]: I1128 18:07:44.510352 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db92d617-2086-418f-8bd8-a76387986e2a" path="/var/lib/kubelet/pods/db92d617-2086-418f-8bd8-a76387986e2a/volumes" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.064814 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lgp6h"] Nov 28 18:08:04 crc kubenswrapper[5024]: E1128 18:08:04.065750 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="extract-utilities" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.065763 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="extract-utilities" Nov 28 18:08:04 crc kubenswrapper[5024]: E1128 18:08:04.065777 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="registry-server" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.065790 5024 
state_mem.go:107] "Deleted CPUSet assignment" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="registry-server" Nov 28 18:08:04 crc kubenswrapper[5024]: E1128 18:08:04.065811 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="extract-content" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.065816 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="extract-content" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.066157 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="db92d617-2086-418f-8bd8-a76387986e2a" containerName="registry-server" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.068504 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.081110 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgp6h"] Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.151803 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgww9\" (UniqueName: \"kubernetes.io/projected/e8017402-505f-4ceb-a2dd-19f49138530f-kube-api-access-hgww9\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.151899 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-catalog-content\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.152100 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-utilities\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.255148 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-utilities\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.255310 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgww9\" (UniqueName: \"kubernetes.io/projected/e8017402-505f-4ceb-a2dd-19f49138530f-kube-api-access-hgww9\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.255374 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-catalog-content\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h"
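The reconciler_common lines around here show the kubelet's volume manager converging on the pod spec: VerifyControllerAttachedVolume registers each desired volume, then MountVolume runs for whatever is not yet mounted (and UnmountVolume later runs for whatever is mounted but no longer desired). A toy model of that desired-versus-actual loop, with invented names and none of the real operationExecutor machinery:

package main

import "fmt"

// reconciler compares what the pod spec wants with what is mounted.
type reconciler struct {
	desired map[string]bool // volumes the pod spec requires
	actual  map[string]bool // volumes currently mounted on the node
}

func (r *reconciler) reconcile() {
	// Mount whatever is desired but not yet mounted.
	for v := range r.desired {
		if !r.actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
			r.actual[v] = true
		}
	}
	// Unmount whatever is mounted but no longer desired.
	for v := range r.actual {
		if !r.desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
			delete(r.actual, v)
		}
	}
}

func main() {
	r := &reconciler{
		desired: map[string]bool{"utilities": true, "catalog-content": true, "kube-api-access-hgww9": true},
		actual:  map[string]bool{},
	}
	r.reconcile() // mounts all three, mirroring the MountVolume lines above
}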
Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.257096 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-catalog-content\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.257782 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-utilities\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.280674 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgww9\" (UniqueName: \"kubernetes.io/projected/e8017402-505f-4ceb-a2dd-19f49138530f-kube-api-access-hgww9\") pod \"community-operators-lgp6h\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:04 crc kubenswrapper[5024]: I1128 18:08:04.405754 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:05 crc kubenswrapper[5024]: I1128 18:08:05.046155 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgp6h"] Nov 28 18:08:05 crc kubenswrapper[5024]: I1128 18:08:05.982254 5024 generic.go:334] "Generic (PLEG): container finished" podID="e8017402-505f-4ceb-a2dd-19f49138530f" containerID="b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793" exitCode=0 Nov 28 18:08:05 crc kubenswrapper[5024]: I1128 18:08:05.982317 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgp6h" event={"ID":"e8017402-505f-4ceb-a2dd-19f49138530f","Type":"ContainerDied","Data":"b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793"} Nov 28 18:08:05 crc kubenswrapper[5024]: I1128 18:08:05.982732 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgp6h" event={"ID":"e8017402-505f-4ceb-a2dd-19f49138530f","Type":"ContainerStarted","Data":"5af8fc78f3d80120a2f32d67c07a9a63c9e746551d71b7706cf29ed443ac6829"} Nov 28 18:08:07 crc kubenswrapper[5024]: I1128 18:08:07.564688 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:08:07 crc kubenswrapper[5024]: I1128 18:08:07.565121 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:08:07 crc kubenswrapper[5024]: I1128 18:08:07.565175 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 18:08:07 crc kubenswrapper[5024]: I1128 18:08:07.566281 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"5badbbac41c54237c3ecda45bd378943c00ae3e6a05816f76e05c62d8cb043e1"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 18:08:07 crc kubenswrapper[5024]: I1128 18:08:07.566337 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://5badbbac41c54237c3ecda45bd378943c00ae3e6a05816f76e05c62d8cb043e1" gracePeriod=600 Nov 28 18:08:08 crc kubenswrapper[5024]: I1128 18:08:08.006152 5024 generic.go:334] "Generic (PLEG): container finished" podID="e8017402-505f-4ceb-a2dd-19f49138530f" containerID="f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3" exitCode=0 Nov 28 18:08:08 crc kubenswrapper[5024]: I1128 18:08:08.006240 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgp6h" event={"ID":"e8017402-505f-4ceb-a2dd-19f49138530f","Type":"ContainerDied","Data":"f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3"} Nov 28 18:08:08 crc kubenswrapper[5024]: I1128 18:08:08.010553 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="5badbbac41c54237c3ecda45bd378943c00ae3e6a05816f76e05c62d8cb043e1" exitCode=0 Nov 28 18:08:08 crc kubenswrapper[5024]: I1128 18:08:08.010606 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"5badbbac41c54237c3ecda45bd378943c00ae3e6a05816f76e05c62d8cb043e1"} Nov 28 18:08:08 crc kubenswrapper[5024]: I1128 18:08:08.010639 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"} Nov 28 18:08:08 crc kubenswrapper[5024]: I1128 18:08:08.010658 5024 scope.go:117] "RemoveContainer" containerID="fdd3da441d41562683986186b9bafa716dbd3fd255efbed77346b59ca096bd24" Nov 28 18:08:10 crc kubenswrapper[5024]: I1128 18:08:10.047324 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgp6h" event={"ID":"e8017402-505f-4ceb-a2dd-19f49138530f","Type":"ContainerStarted","Data":"0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8"} Nov 28 18:08:10 crc kubenswrapper[5024]: I1128 18:08:10.066916 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lgp6h" podStartSLOduration=3.484333647 podStartE2EDuration="6.066898748s" podCreationTimestamp="2025-11-28 18:08:04 +0000 UTC" firstStartedPulling="2025-11-28 18:08:05.98505947 +0000 UTC m=+4188.033980375" lastFinishedPulling="2025-11-28 18:08:08.567624571 +0000 UTC m=+4190.616545476" observedRunningTime="2025-11-28 18:08:10.066046134 +0000 UTC m=+4192.114967059" watchObservedRunningTime="2025-11-28 18:08:10.066898748 +0000 UTC m=+4192.115819653" Nov 28 18:08:14 crc kubenswrapper[5024]: I1128 18:08:14.406672 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:14 crc kubenswrapper[5024]: I1128 18:08:14.406998 
Nov 28 18:08:14 crc kubenswrapper[5024]: I1128 18:08:14.406998 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:14 crc kubenswrapper[5024]: I1128 18:08:14.455212 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:15 crc kubenswrapper[5024]: I1128 18:08:15.146266 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:15 crc kubenswrapper[5024]: I1128 18:08:15.195332 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgp6h"] Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.128741 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lgp6h" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="registry-server" containerID="cri-o://0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8" gracePeriod=2 Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.671763 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.848258 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-utilities\") pod \"e8017402-505f-4ceb-a2dd-19f49138530f\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.848870 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgww9\" (UniqueName: \"kubernetes.io/projected/e8017402-505f-4ceb-a2dd-19f49138530f-kube-api-access-hgww9\") pod \"e8017402-505f-4ceb-a2dd-19f49138530f\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.849131 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-catalog-content\") pod \"e8017402-505f-4ceb-a2dd-19f49138530f\" (UID: \"e8017402-505f-4ceb-a2dd-19f49138530f\") " Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.849661 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-utilities" (OuterVolumeSpecName: "utilities") pod "e8017402-505f-4ceb-a2dd-19f49138530f" (UID: "e8017402-505f-4ceb-a2dd-19f49138530f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.850272 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.855102 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8017402-505f-4ceb-a2dd-19f49138530f-kube-api-access-hgww9" (OuterVolumeSpecName: "kube-api-access-hgww9") pod "e8017402-505f-4ceb-a2dd-19f49138530f" (UID: "e8017402-505f-4ceb-a2dd-19f49138530f"). InnerVolumeSpecName "kube-api-access-hgww9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.916830 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8017402-505f-4ceb-a2dd-19f49138530f" (UID: "e8017402-505f-4ceb-a2dd-19f49138530f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.951655 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgww9\" (UniqueName: \"kubernetes.io/projected/e8017402-505f-4ceb-a2dd-19f49138530f-kube-api-access-hgww9\") on node \"crc\" DevicePath \"\"" Nov 28 18:08:17 crc kubenswrapper[5024]: I1128 18:08:17.951944 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8017402-505f-4ceb-a2dd-19f49138530f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.142408 5024 generic.go:334] "Generic (PLEG): container finished" podID="e8017402-505f-4ceb-a2dd-19f49138530f" containerID="0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8" exitCode=0 Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.142467 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgp6h" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.142471 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgp6h" event={"ID":"e8017402-505f-4ceb-a2dd-19f49138530f","Type":"ContainerDied","Data":"0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8"} Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.142545 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgp6h" event={"ID":"e8017402-505f-4ceb-a2dd-19f49138530f","Type":"ContainerDied","Data":"5af8fc78f3d80120a2f32d67c07a9a63c9e746551d71b7706cf29ed443ac6829"} Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.142578 5024 scope.go:117] "RemoveContainer" containerID="0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.168678 5024 scope.go:117] "RemoveContainer" containerID="f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.195772 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgp6h"] Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.217177 5024 scope.go:117] "RemoveContainer" containerID="b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.225703 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lgp6h"] Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.260123 5024 scope.go:117] "RemoveContainer" containerID="0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8" Nov 28 18:08:18 crc kubenswrapper[5024]: E1128 18:08:18.260572 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8\": container with ID starting with 
0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8 not found: ID does not exist" containerID="0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.260618 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8"} err="failed to get container status \"0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8\": rpc error: code = NotFound desc = could not find container \"0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8\": container with ID starting with 0588d673394d8b534e69199d6caa161b6010e6e8741286b49d972af029c2a1c8 not found: ID does not exist" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.260647 5024 scope.go:117] "RemoveContainer" containerID="f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3" Nov 28 18:08:18 crc kubenswrapper[5024]: E1128 18:08:18.260974 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3\": container with ID starting with f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3 not found: ID does not exist" containerID="f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.261048 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3"} err="failed to get container status \"f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3\": rpc error: code = NotFound desc = could not find container \"f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3\": container with ID starting with f7e2e6fdbb86eaf43ed553aa20870ec3e72ddcd30c9580ab6b7ce82ce07c98c3 not found: ID does not exist" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.261073 5024 scope.go:117] "RemoveContainer" containerID="b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793" Nov 28 18:08:18 crc kubenswrapper[5024]: E1128 18:08:18.261426 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793\": container with ID starting with b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793 not found: ID does not exist" containerID="b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.261485 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793"} err="failed to get container status \"b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793\": rpc error: code = NotFound desc = could not find container \"b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793\": container with ID starting with b444ff9f68b91f0c62b5589875439cab2c9824a9ae40fbbfd85b930a2ab90793 not found: ID does not exist" Nov 28 18:08:18 crc kubenswrapper[5024]: I1128 18:08:18.522259 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" path="/var/lib/kubelet/pods/e8017402-505f-4ceb-a2dd-19f49138530f/volumes" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.790397 
5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8zrb"] Nov 28 18:08:29 crc kubenswrapper[5024]: E1128 18:08:29.791550 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="registry-server" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.791565 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="registry-server" Nov 28 18:08:29 crc kubenswrapper[5024]: E1128 18:08:29.791605 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="extract-content" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.791613 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="extract-content" Nov 28 18:08:29 crc kubenswrapper[5024]: E1128 18:08:29.791629 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="extract-utilities" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.791635 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="extract-utilities" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.791874 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8017402-505f-4ceb-a2dd-19f49138530f" containerName="registry-server" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.795953 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.815985 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwzf\" (UniqueName: \"kubernetes.io/projected/0ec4581f-84b6-4882-a1be-cdf2aaaea941-kube-api-access-zpwzf\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.816034 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-catalog-content\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.816151 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-utilities\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.819977 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8zrb"] Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.917122 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpwzf\" (UniqueName: \"kubernetes.io/projected/0ec4581f-84b6-4882-a1be-cdf2aaaea941-kube-api-access-zpwzf\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc 
kubenswrapper[5024]: I1128 18:08:29.917173 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-catalog-content\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.917283 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-utilities\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.917688 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-catalog-content\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.917852 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-utilities\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:29 crc kubenswrapper[5024]: I1128 18:08:29.953472 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpwzf\" (UniqueName: \"kubernetes.io/projected/0ec4581f-84b6-4882-a1be-cdf2aaaea941-kube-api-access-zpwzf\") pod \"redhat-marketplace-w8zrb\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") " pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:30 crc kubenswrapper[5024]: I1128 18:08:30.132626 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:30 crc kubenswrapper[5024]: I1128 18:08:30.684763 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8zrb"] Nov 28 18:08:31 crc kubenswrapper[5024]: I1128 18:08:31.288965 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerStarted","Data":"4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3"} Nov 28 18:08:31 crc kubenswrapper[5024]: I1128 18:08:31.289014 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerStarted","Data":"5fe9bd0ae82082a384ea4a606d467a8025cb31d8e2c8865de026f4a530620307"} Nov 28 18:08:32 crc kubenswrapper[5024]: I1128 18:08:32.300709 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerID="4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3" exitCode=0 Nov 28 18:08:32 crc kubenswrapper[5024]: I1128 18:08:32.301904 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerDied","Data":"4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3"} Nov 28 18:08:34 crc kubenswrapper[5024]: I1128 18:08:34.322861 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerStarted","Data":"624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016"} Nov 28 18:08:35 crc kubenswrapper[5024]: I1128 18:08:35.335349 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerID="624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016" exitCode=0 Nov 28 18:08:35 crc kubenswrapper[5024]: I1128 18:08:35.335442 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerDied","Data":"624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016"} Nov 28 18:08:37 crc kubenswrapper[5024]: I1128 18:08:37.358068 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerStarted","Data":"1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6"} Nov 28 18:08:40 crc kubenswrapper[5024]: I1128 18:08:40.133391 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:40 crc kubenswrapper[5024]: I1128 18:08:40.133949 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:40 crc kubenswrapper[5024]: I1128 18:08:40.199671 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8zrb" Nov 28 18:08:40 crc kubenswrapper[5024]: I1128 18:08:40.221233 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8zrb" podStartSLOduration=7.418583493 podStartE2EDuration="11.22121624s" podCreationTimestamp="2025-11-28 
18:08:29 +0000 UTC" firstStartedPulling="2025-11-28 18:08:32.304955758 +0000 UTC m=+4214.353876663" lastFinishedPulling="2025-11-28 18:08:36.107588475 +0000 UTC m=+4218.156509410" observedRunningTime="2025-11-28 18:08:37.381492415 +0000 UTC m=+4219.430413320" watchObservedRunningTime="2025-11-28 18:08:40.22121624 +0000 UTC m=+4222.270137145"
Nov 28 18:08:50 crc kubenswrapper[5024]: I1128 18:08:50.191760 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8zrb"
Nov 28 18:08:50 crc kubenswrapper[5024]: I1128 18:08:50.249224 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8zrb"]
Nov 28 18:08:50 crc kubenswrapper[5024]: I1128 18:08:50.490802 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w8zrb" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="registry-server" containerID="cri-o://1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6" gracePeriod=2
Nov 28 18:08:50 crc kubenswrapper[5024]: I1128 18:08:50.971429 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8zrb"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.101108 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-catalog-content\") pod \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") "
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.101457 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-utilities\") pod \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") "
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.101537 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpwzf\" (UniqueName: \"kubernetes.io/projected/0ec4581f-84b6-4882-a1be-cdf2aaaea941-kube-api-access-zpwzf\") pod \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\" (UID: \"0ec4581f-84b6-4882-a1be-cdf2aaaea941\") "
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.102788 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-utilities" (OuterVolumeSpecName: "utilities") pod "0ec4581f-84b6-4882-a1be-cdf2aaaea941" (UID: "0ec4581f-84b6-4882-a1be-cdf2aaaea941"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.107655 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec4581f-84b6-4882-a1be-cdf2aaaea941-kube-api-access-zpwzf" (OuterVolumeSpecName: "kube-api-access-zpwzf") pod "0ec4581f-84b6-4882-a1be-cdf2aaaea941" (UID: "0ec4581f-84b6-4882-a1be-cdf2aaaea941"). InnerVolumeSpecName "kube-api-access-zpwzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.132719 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ec4581f-84b6-4882-a1be-cdf2aaaea941" (UID: "0ec4581f-84b6-4882-a1be-cdf2aaaea941"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.204702 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.204740 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec4581f-84b6-4882-a1be-cdf2aaaea941-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.204751 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpwzf\" (UniqueName: \"kubernetes.io/projected/0ec4581f-84b6-4882-a1be-cdf2aaaea941-kube-api-access-zpwzf\") on node \"crc\" DevicePath \"\""
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.502005 5024 generic.go:334] "Generic (PLEG): container finished" podID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerID="1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6" exitCode=0
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.502075 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerDied","Data":"1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6"}
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.502087 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8zrb"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.502112 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8zrb" event={"ID":"0ec4581f-84b6-4882-a1be-cdf2aaaea941","Type":"ContainerDied","Data":"5fe9bd0ae82082a384ea4a606d467a8025cb31d8e2c8865de026f4a530620307"}
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.502130 5024 scope.go:117] "RemoveContainer" containerID="1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.533954 5024 scope.go:117] "RemoveContainer" containerID="624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.541811 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8zrb"]
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.554581 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8zrb"]
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.565434 5024 scope.go:117] "RemoveContainer" containerID="4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.639496 5024 scope.go:117] "RemoveContainer" containerID="1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6"
Nov 28 18:08:51 crc kubenswrapper[5024]: E1128 18:08:51.646822 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6\": container with ID starting with 1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6 not found: ID does not exist" containerID="1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.646877 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6"} err="failed to get container status \"1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6\": rpc error: code = NotFound desc = could not find container \"1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6\": container with ID starting with 1df32e30e8945597fd04336fc34e55872b78c073771f55b711a443833dfa18c6 not found: ID does not exist"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.646915 5024 scope.go:117] "RemoveContainer" containerID="624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016"
Nov 28 18:08:51 crc kubenswrapper[5024]: E1128 18:08:51.647362 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016\": container with ID starting with 624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016 not found: ID does not exist" containerID="624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.647396 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016"} err="failed to get container status \"624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016\": rpc error: code = NotFound desc = could not find container \"624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016\": container with ID starting with 624d48afc191705cc3043c2ec27643975a8e0885c4eebbbf98d47f16196ea016 not found: ID does not exist"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.647416 5024 scope.go:117] "RemoveContainer" containerID="4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3"
Nov 28 18:08:51 crc kubenswrapper[5024]: E1128 18:08:51.647722 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3\": container with ID starting with 4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3 not found: ID does not exist" containerID="4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3"
Nov 28 18:08:51 crc kubenswrapper[5024]: I1128 18:08:51.647769 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3"} err="failed to get container status \"4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3\": rpc error: code = NotFound desc = could not find container \"4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3\": container with ID starting with 4c7de4c16d6cc078879083b3d127efcf25d306c13e522330ac486c9f155909e3 not found: ID does not exist"
Nov 28 18:08:52 crc kubenswrapper[5024]: I1128 18:08:52.513191 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" path="/var/lib/kubelet/pods/0ec4581f-84b6-4882-a1be-cdf2aaaea941/volumes"
Nov 28 18:10:07 crc kubenswrapper[5024]: I1128 18:10:07.564712 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 18:10:07 crc kubenswrapper[5024]: I1128 18:10:07.565287 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 18:10:37 crc kubenswrapper[5024]: I1128 18:10:37.564634 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 18:10:37 crc kubenswrapper[5024]: I1128 18:10:37.565199 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 18:11:07 crc kubenswrapper[5024]: I1128 18:11:07.564985 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 18:11:07 crc kubenswrapper[5024]: I1128 18:11:07.565542 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 18:11:07 crc kubenswrapper[5024]: I1128 18:11:07.565584 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf"
Nov 28 18:11:07 crc kubenswrapper[5024]: I1128 18:11:07.566423 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 18:11:07 crc kubenswrapper[5024]: I1128 18:11:07.566469 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74" gracePeriod=600
Nov 28 18:11:07 crc kubenswrapper[5024]: E1128 18:11:07.748080 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:11:08 crc kubenswrapper[5024]: I1128 18:11:08.028769 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74" exitCode=0
Nov 28 18:11:08 crc kubenswrapper[5024]: I1128 18:11:08.028857 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"}
Nov 28 18:11:08 crc kubenswrapper[5024]: I1128 18:11:08.029152 5024 scope.go:117] "RemoveContainer" containerID="5badbbac41c54237c3ecda45bd378943c00ae3e6a05816f76e05c62d8cb043e1"
Nov 28 18:11:08 crc kubenswrapper[5024]: I1128 18:11:08.029952 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:11:08 crc kubenswrapper[5024]: E1128 18:11:08.030499 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:11:20 crc kubenswrapper[5024]: I1128 18:11:20.499942 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:11:20 crc kubenswrapper[5024]: E1128 18:11:20.500791 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:11:32 crc kubenswrapper[5024]: E1128 18:11:32.400268 5024 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.141:46780->38.129.56.141:40169: write tcp 38.129.56.141:46780->38.129.56.141:40169: write: broken pipe
Nov 28 18:11:32 crc kubenswrapper[5024]: I1128 18:11:32.498472 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:11:32 crc kubenswrapper[5024]: E1128 18:11:32.498858 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:11:47 crc kubenswrapper[5024]: I1128 18:11:47.497752 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:11:47 crc kubenswrapper[5024]: E1128 18:11:47.498761 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:12:01 crc kubenswrapper[5024]: I1128 18:12:01.498575 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:12:01 crc kubenswrapper[5024]: E1128 18:12:01.499356 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:12:12 crc kubenswrapper[5024]: I1128 18:12:12.500775 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:12:12 crc kubenswrapper[5024]: E1128 18:12:12.502484 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:12:26 crc kubenswrapper[5024]: I1128 18:12:26.498181 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:12:26 crc kubenswrapper[5024]: E1128 18:12:26.499124 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:12:38 crc kubenswrapper[5024]: I1128 18:12:38.506463 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:12:38 crc kubenswrapper[5024]: E1128 18:12:38.507389 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:12:52 crc kubenswrapper[5024]: I1128 18:12:52.497759 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:12:52 crc kubenswrapper[5024]: E1128 18:12:52.498560 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:13:05 crc kubenswrapper[5024]: I1128 18:13:05.498734 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:13:05 crc kubenswrapper[5024]: E1128 18:13:05.499904 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:13:20 crc kubenswrapper[5024]: I1128 18:13:20.498341 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:13:20 crc kubenswrapper[5024]: E1128 18:13:20.499248 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:13:34 crc kubenswrapper[5024]: I1128 18:13:34.498630 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:13:34 crc kubenswrapper[5024]: E1128 18:13:34.500375 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:13:47 crc kubenswrapper[5024]: I1128 18:13:47.497717 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:13:47 crc kubenswrapper[5024]: E1128 18:13:47.498506 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:13:59 crc kubenswrapper[5024]: I1128 18:13:59.498554 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:13:59 crc kubenswrapper[5024]: E1128 18:13:59.499436 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:14:11 crc kubenswrapper[5024]: I1128 18:14:11.497869 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:14:11 crc kubenswrapper[5024]: E1128 18:14:11.498717 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:14:23 crc kubenswrapper[5024]: I1128 18:14:23.498968 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:14:23 crc kubenswrapper[5024]: E1128 18:14:23.499845 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:14:37 crc kubenswrapper[5024]: I1128 18:14:37.498761 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:14:37 crc kubenswrapper[5024]: E1128 18:14:37.499581 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:14:50 crc kubenswrapper[5024]: I1128 18:14:50.498785 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:14:50 crc kubenswrapper[5024]: E1128 18:14:50.499892 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.186753 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"]
Nov 28 18:15:00 crc kubenswrapper[5024]: E1128 18:15:00.187764 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="registry-server"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.187785 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="registry-server"
Nov 28 18:15:00 crc kubenswrapper[5024]: E1128 18:15:00.187825 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="extract-utilities"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.187834 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="extract-utilities"
Nov 28 18:15:00 crc kubenswrapper[5024]: E1128 18:15:00.187858 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="extract-content"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.187866 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="extract-content"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.188186 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec4581f-84b6-4882-a1be-cdf2aaaea941" containerName="registry-server"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.189247 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.196972 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.196972 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.208063 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"]
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.382258 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4clc\" (UniqueName: \"kubernetes.io/projected/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-kube-api-access-v4clc\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.382728 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-config-volume\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.383039 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-secret-volume\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.484994 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-config-volume\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.485169 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-secret-volume\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.485239 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4clc\" (UniqueName: \"kubernetes.io/projected/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-kube-api-access-v4clc\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.486215 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-config-volume\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.495275 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-secret-volume\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.507655 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4clc\" (UniqueName: \"kubernetes.io/projected/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-kube-api-access-v4clc\") pod \"collect-profiles-29405895-k4qst\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:00 crc kubenswrapper[5024]: I1128 18:15:00.522923 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:01 crc kubenswrapper[5024]: I1128 18:15:01.037680 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"]
Nov 28 18:15:01 crc kubenswrapper[5024]: I1128 18:15:01.800569 5024 generic.go:334] "Generic (PLEG): container finished" podID="1d8ef5ba-9401-4321-8f9d-ce3466b70dd3" containerID="99b423412e940adf4ac843e88c700dae14d40d93e59f3ffe923e244bcf079be6" exitCode=0
Nov 28 18:15:01 crc kubenswrapper[5024]: I1128 18:15:01.800768 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst" event={"ID":"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3","Type":"ContainerDied","Data":"99b423412e940adf4ac843e88c700dae14d40d93e59f3ffe923e244bcf079be6"}
Nov 28 18:15:01 crc kubenswrapper[5024]: I1128 18:15:01.800925 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst" event={"ID":"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3","Type":"ContainerStarted","Data":"ea04b0b11240438ca78688786736abe090909f58867ed780cf06299fc6f2a36b"}
Nov 28 18:15:02 crc kubenswrapper[5024]: I1128 18:15:02.498774 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:15:02 crc kubenswrapper[5024]: E1128 18:15:02.499456 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.000451 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.104947 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-config-volume\") pod \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") "
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.105436 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-secret-volume\") pod \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") "
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.105511 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4clc\" (UniqueName: \"kubernetes.io/projected/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-kube-api-access-v4clc\") pod \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\" (UID: \"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3\") "
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.106573 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-config-volume" (OuterVolumeSpecName: "config-volume") pod "1d8ef5ba-9401-4321-8f9d-ce3466b70dd3" (UID: "1d8ef5ba-9401-4321-8f9d-ce3466b70dd3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.111256 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-kube-api-access-v4clc" (OuterVolumeSpecName: "kube-api-access-v4clc") pod "1d8ef5ba-9401-4321-8f9d-ce3466b70dd3" (UID: "1d8ef5ba-9401-4321-8f9d-ce3466b70dd3"). InnerVolumeSpecName "kube-api-access-v4clc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.111304 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1d8ef5ba-9401-4321-8f9d-ce3466b70dd3" (UID: "1d8ef5ba-9401-4321-8f9d-ce3466b70dd3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.208483 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.208768 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.208845 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4clc\" (UniqueName: \"kubernetes.io/projected/1d8ef5ba-9401-4321-8f9d-ce3466b70dd3-kube-api-access-v4clc\") on node \"crc\" DevicePath \"\""
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.836667 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst" event={"ID":"1d8ef5ba-9401-4321-8f9d-ce3466b70dd3","Type":"ContainerDied","Data":"ea04b0b11240438ca78688786736abe090909f58867ed780cf06299fc6f2a36b"}
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.836697 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405895-k4qst"
Nov 28 18:15:04 crc kubenswrapper[5024]: I1128 18:15:04.836706 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea04b0b11240438ca78688786736abe090909f58867ed780cf06299fc6f2a36b"
Nov 28 18:15:05 crc kubenswrapper[5024]: I1128 18:15:05.067079 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn"]
Nov 28 18:15:05 crc kubenswrapper[5024]: I1128 18:15:05.079680 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-ssxdn"]
Nov 28 18:15:06 crc kubenswrapper[5024]: I1128 18:15:06.523231 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac17d18-79a2-48b3-8ea9-e1e84f472a51" path="/var/lib/kubelet/pods/8ac17d18-79a2-48b3-8ea9-e1e84f472a51/volumes"
Nov 28 18:15:14 crc kubenswrapper[5024]: I1128 18:15:14.499512 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:15:14 crc kubenswrapper[5024]: E1128 18:15:14.500288 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.530679 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 28 18:15:25 crc kubenswrapper[5024]: E1128 18:15:25.531718 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8ef5ba-9401-4321-8f9d-ce3466b70dd3" containerName="collect-profiles"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.531733 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8ef5ba-9401-4321-8f9d-ce3466b70dd3" containerName="collect-profiles"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.532008 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8ef5ba-9401-4321-8f9d-ce3466b70dd3" containerName="collect-profiles"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.532839 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.536781 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.537498 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l7s8n"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.537655 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.537796 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.575838 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601222 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601300 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601336 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601353 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601397 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601551 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601576 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601602 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6hh\" (UniqueName: \"kubernetes.io/projected/38ea9d2b-3972-4bda-9cdd-c341334be5d1-kube-api-access-cm6hh\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.601630 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-config-data\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703350 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6hh\" (UniqueName: \"kubernetes.io/projected/38ea9d2b-3972-4bda-9cdd-c341334be5d1-kube-api-access-cm6hh\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703412 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-config-data\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703497 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703542 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703578 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703596 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703627 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703722 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.703746 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.704908 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-config-data\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.704913 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.705654 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.705795 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.706000 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.709541 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.710280 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.711504 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.719420 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6hh\" (UniqueName: \"kubernetes.io/projected/38ea9d2b-3972-4bda-9cdd-c341334be5d1-kube-api-access-cm6hh\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.773458 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " pod="openstack/tempest-tests-tempest"
Nov 28 18:15:25 crc kubenswrapper[5024]: I1128 18:15:25.900842 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 28 18:15:26 crc kubenswrapper[5024]: I1128 18:15:26.405903 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 28 18:15:26 crc kubenswrapper[5024]: W1128 18:15:26.407799 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38ea9d2b_3972_4bda_9cdd_c341334be5d1.slice/crio-00ada76ea9d32754f0b7e47f40b9b1a634f4741e46be1b578248f223a2a4bab7 WatchSource:0}: Error finding container 00ada76ea9d32754f0b7e47f40b9b1a634f4741e46be1b578248f223a2a4bab7: Status 404 returned error can't find the container with id 00ada76ea9d32754f0b7e47f40b9b1a634f4741e46be1b578248f223a2a4bab7
Nov 28 18:15:26 crc kubenswrapper[5024]: I1128 18:15:26.410608 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 18:15:27 crc kubenswrapper[5024]: I1128 18:15:27.108974 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"38ea9d2b-3972-4bda-9cdd-c341334be5d1","Type":"ContainerStarted","Data":"00ada76ea9d32754f0b7e47f40b9b1a634f4741e46be1b578248f223a2a4bab7"}
Nov 28 18:15:29 crc kubenswrapper[5024]: I1128 18:15:29.498346 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:15:29 crc kubenswrapper[5024]: E1128 18:15:29.499226 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:15:44 crc kubenswrapper[5024]: I1128 18:15:44.499215 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:15:44 crc kubenswrapper[5024]: E1128 18:15:44.500117 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:15:58 crc kubenswrapper[5024]: I1128 18:15:58.270499 5024 scope.go:117] "RemoveContainer" containerID="58f6c9808de01267a71d21d9e7d987d236c2f2fc3c1792f09f44f08e89daee43"
Nov 28 18:15:58 crc kubenswrapper[5024]: I1128 18:15:58.507418 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:15:58 crc kubenswrapper[5024]: E1128 18:15:58.507770 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:16:08 crc kubenswrapper[5024]: E1128 18:16:08.790807 5024 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Nov 28 18:16:08 crc kubenswrapper[5024]: E1128 18:16:08.792459 5024 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cm6hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(38ea9d2b-3972-4bda-9cdd-c341334be5d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 18:16:08 crc kubenswrapper[5024]: E1128 18:16:08.793758 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="38ea9d2b-3972-4bda-9cdd-c341334be5d1"
Nov 28 18:16:09 crc kubenswrapper[5024]: E1128 18:16:09.675931 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="38ea9d2b-3972-4bda-9cdd-c341334be5d1"
Nov 28 18:16:11 crc kubenswrapper[5024]: I1128 18:16:11.498967 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74"
Nov 28 18:16:12 crc kubenswrapper[5024]: I1128 18:16:12.711538 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"d06361962fe509995d04df9a9542446ec780fadff703acb27501511c9c538a1c"}
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.284910 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ndjrg"]
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.296156 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.319002 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndjrg"]
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.451729 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8csjz\" (UniqueName: \"kubernetes.io/projected/c8060a0e-3c7c-4827-91c4-681ed124ffa5-kube-api-access-8csjz\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.451825 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-utilities\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.452373 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-catalog-content\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.554842 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-utilities\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.555071 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-catalog-content\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.555213 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8csjz\" (UniqueName: \"kubernetes.io/projected/c8060a0e-3c7c-4827-91c4-681ed124ffa5-kube-api-access-8csjz\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.556131 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-utilities\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.556387 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-catalog-content\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.584738 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8csjz\" (UniqueName: \"kubernetes.io/projected/c8060a0e-3c7c-4827-91c4-681ed124ffa5-kube-api-access-8csjz\") pod \"certified-operators-ndjrg\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:18 crc kubenswrapper[5024]: I1128 18:16:18.627365 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndjrg"
Nov 28 18:16:19 crc kubenswrapper[5024]: I1128 18:16:19.188078 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndjrg"]
Nov 28 18:16:19 crc kubenswrapper[5024]: W1128 18:16:19.194179 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8060a0e_3c7c_4827_91c4_681ed124ffa5.slice/crio-7a61c79f7aabdcd2b89858eecadf3df780b2b97a174540d5982e2aa56c79736a WatchSource:0}: Error finding container 7a61c79f7aabdcd2b89858eecadf3df780b2b97a174540d5982e2aa56c79736a: Status 404 returned error can't find the container with id 7a61c79f7aabdcd2b89858eecadf3df780b2b97a174540d5982e2aa56c79736a
Nov 28 18:16:19 crc kubenswrapper[5024]: I1128 18:16:19.783795 5024 generic.go:334] "Generic (PLEG): container finished" podID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerID="20f051a5103fc79c77b7917fee56596bfd51a25ff5e48e29f5cff912af15589a" exitCode=0
Nov 28 18:16:19 crc kubenswrapper[5024]: I1128 18:16:19.783877 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndjrg" event={"ID":"c8060a0e-3c7c-4827-91c4-681ed124ffa5","Type":"ContainerDied","Data":"20f051a5103fc79c77b7917fee56596bfd51a25ff5e48e29f5cff912af15589a"}
Nov 28 18:16:19 crc kubenswrapper[5024]: I1128 18:16:19.784097 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndjrg" event={"ID":"c8060a0e-3c7c-4827-91c4-681ed124ffa5","Type":"ContainerStarted","Data":"7a61c79f7aabdcd2b89858eecadf3df780b2b97a174540d5982e2aa56c79736a"}
Nov 28 18:16:20 crc kubenswrapper[5024]: I1128 18:16:20.796086 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndjrg" event={"ID":"c8060a0e-3c7c-4827-91c4-681ed124ffa5","Type":"ContainerStarted","Data":"094aff01649a1d1c73ed50ea196e1f9969a036f63ff5130c986add8ca7e81941"}
Nov 28 18:16:21 crc kubenswrapper[5024]: I1128 18:16:21.809113 5024 generic.go:334] "Generic (PLEG): container finished" podID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerID="094aff01649a1d1c73ed50ea196e1f9969a036f63ff5130c986add8ca7e81941" exitCode=0
Nov 28 18:16:21 crc kubenswrapper[5024]: I1128 18:16:21.809169 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndjrg" event={"ID":"c8060a0e-3c7c-4827-91c4-681ed124ffa5","Type":"ContainerDied","Data":"094aff01649a1d1c73ed50ea196e1f9969a036f63ff5130c986add8ca7e81941"}
Nov 28 18:16:24 crc kubenswrapper[5024]: I1128 18:16:24.844294 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndjrg" event={"ID":"c8060a0e-3c7c-4827-91c4-681ed124ffa5","Type":"ContainerStarted","Data":"ec00d09f83338dde11df14545851a157089ad0cf96cc4e7ee391da01bf84a18f"}
Nov 28 18:16:24 crc kubenswrapper[5024]: I1128 18:16:24.875670 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ndjrg"
podStartSLOduration=2.556067417 podStartE2EDuration="6.875650505s" podCreationTimestamp="2025-11-28 18:16:18 +0000 UTC" firstStartedPulling="2025-11-28 18:16:19.787488059 +0000 UTC m=+4681.836408964" lastFinishedPulling="2025-11-28 18:16:24.107071147 +0000 UTC m=+4686.155992052" observedRunningTime="2025-11-28 18:16:24.871715176 +0000 UTC m=+4686.920636091" watchObservedRunningTime="2025-11-28 18:16:24.875650505 +0000 UTC m=+4686.924571410" Nov 28 18:16:25 crc kubenswrapper[5024]: I1128 18:16:25.365751 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 18:16:26 crc kubenswrapper[5024]: I1128 18:16:26.871918 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"38ea9d2b-3972-4bda-9cdd-c341334be5d1","Type":"ContainerStarted","Data":"52e1c50b3865b7b1de80ca1ff53eb39c6b5738fc00f50caea2adf9e9ddb3a4f2"} Nov 28 18:16:26 crc kubenswrapper[5024]: I1128 18:16:26.904533 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.953079964 podStartE2EDuration="1m2.904515002s" podCreationTimestamp="2025-11-28 18:15:24 +0000 UTC" firstStartedPulling="2025-11-28 18:15:26.41036065 +0000 UTC m=+4628.459281555" lastFinishedPulling="2025-11-28 18:16:25.361795698 +0000 UTC m=+4687.410716593" observedRunningTime="2025-11-28 18:16:26.889968757 +0000 UTC m=+4688.938889662" watchObservedRunningTime="2025-11-28 18:16:26.904515002 +0000 UTC m=+4688.953435907" Nov 28 18:16:28 crc kubenswrapper[5024]: I1128 18:16:28.627734 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ndjrg" Nov 28 18:16:28 crc kubenswrapper[5024]: I1128 18:16:28.628078 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ndjrg" Nov 28 18:16:28 crc kubenswrapper[5024]: I1128 18:16:28.680185 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ndjrg" Nov 28 18:16:39 crc kubenswrapper[5024]: I1128 18:16:39.034401 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ndjrg" Nov 28 18:16:39 crc kubenswrapper[5024]: I1128 18:16:39.108395 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndjrg"] Nov 28 18:16:40 crc kubenswrapper[5024]: I1128 18:16:40.043918 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ndjrg" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="registry-server" containerID="cri-o://ec00d09f83338dde11df14545851a157089ad0cf96cc4e7ee391da01bf84a18f" gracePeriod=2 Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.058098 5024 generic.go:334] "Generic (PLEG): container finished" podID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerID="ec00d09f83338dde11df14545851a157089ad0cf96cc4e7ee391da01bf84a18f" exitCode=0 Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.058746 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndjrg" event={"ID":"c8060a0e-3c7c-4827-91c4-681ed124ffa5","Type":"ContainerDied","Data":"ec00d09f83338dde11df14545851a157089ad0cf96cc4e7ee391da01bf84a18f"} Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.058799 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ndjrg" event={"ID":"c8060a0e-3c7c-4827-91c4-681ed124ffa5","Type":"ContainerDied","Data":"7a61c79f7aabdcd2b89858eecadf3df780b2b97a174540d5982e2aa56c79736a"} Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.058818 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a61c79f7aabdcd2b89858eecadf3df780b2b97a174540d5982e2aa56c79736a" Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.119304 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndjrg" Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.300413 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-utilities\") pod \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.300506 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-catalog-content\") pod \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.300558 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8csjz\" (UniqueName: \"kubernetes.io/projected/c8060a0e-3c7c-4827-91c4-681ed124ffa5-kube-api-access-8csjz\") pod \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\" (UID: \"c8060a0e-3c7c-4827-91c4-681ed124ffa5\") " Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.301072 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-utilities" (OuterVolumeSpecName: "utilities") pod "c8060a0e-3c7c-4827-91c4-681ed124ffa5" (UID: "c8060a0e-3c7c-4827-91c4-681ed124ffa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.301925 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.309825 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8060a0e-3c7c-4827-91c4-681ed124ffa5-kube-api-access-8csjz" (OuterVolumeSpecName: "kube-api-access-8csjz") pod "c8060a0e-3c7c-4827-91c4-681ed124ffa5" (UID: "c8060a0e-3c7c-4827-91c4-681ed124ffa5"). InnerVolumeSpecName "kube-api-access-8csjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.350862 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8060a0e-3c7c-4827-91c4-681ed124ffa5" (UID: "c8060a0e-3c7c-4827-91c4-681ed124ffa5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.403895 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8csjz\" (UniqueName: \"kubernetes.io/projected/c8060a0e-3c7c-4827-91c4-681ed124ffa5-kube-api-access-8csjz\") on node \"crc\" DevicePath \"\"" Nov 28 18:16:41 crc kubenswrapper[5024]: I1128 18:16:41.403932 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8060a0e-3c7c-4827-91c4-681ed124ffa5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:16:42 crc kubenswrapper[5024]: I1128 18:16:42.066222 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndjrg" Nov 28 18:16:42 crc kubenswrapper[5024]: I1128 18:16:42.104614 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndjrg"] Nov 28 18:16:42 crc kubenswrapper[5024]: I1128 18:16:42.114470 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ndjrg"] Nov 28 18:16:42 crc kubenswrapper[5024]: I1128 18:16:42.511149 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" path="/var/lib/kubelet/pods/c8060a0e-3c7c-4827-91c4-681ed124ffa5/volumes" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.038735 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gr7pw"] Nov 28 18:17:48 crc kubenswrapper[5024]: E1128 18:17:48.039948 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="extract-content" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.044604 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="extract-content" Nov 28 18:17:48 crc kubenswrapper[5024]: E1128 18:17:48.044674 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="registry-server" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.044686 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="registry-server" Nov 28 18:17:48 crc kubenswrapper[5024]: E1128 18:17:48.044780 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="extract-utilities" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.044789 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="extract-utilities" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.045403 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8060a0e-3c7c-4827-91c4-681ed124ffa5" containerName="registry-server" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.049909 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.107567 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr7pw"] Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.149736 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-utilities\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.149949 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbsc\" (UniqueName: \"kubernetes.io/projected/350bc718-21de-4145-8eb7-a41728be99a9-kube-api-access-6bbsc\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.150179 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-catalog-content\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.252655 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-utilities\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.252791 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbsc\" (UniqueName: \"kubernetes.io/projected/350bc718-21de-4145-8eb7-a41728be99a9-kube-api-access-6bbsc\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.252861 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-catalog-content\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.255910 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-utilities\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.261624 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-catalog-content\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.281469 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6bbsc\" (UniqueName: \"kubernetes.io/projected/350bc718-21de-4145-8eb7-a41728be99a9-kube-api-access-6bbsc\") pod \"redhat-operators-gr7pw\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:48 crc kubenswrapper[5024]: I1128 18:17:48.397681 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:49 crc kubenswrapper[5024]: I1128 18:17:49.291465 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr7pw"] Nov 28 18:17:49 crc kubenswrapper[5024]: W1128 18:17:49.339756 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod350bc718_21de_4145_8eb7_a41728be99a9.slice/crio-463cf7462d85223d304f48955a33bf649004f38c23f6d8cf2475e8a9ef64f64c WatchSource:0}: Error finding container 463cf7462d85223d304f48955a33bf649004f38c23f6d8cf2475e8a9ef64f64c: Status 404 returned error can't find the container with id 463cf7462d85223d304f48955a33bf649004f38c23f6d8cf2475e8a9ef64f64c Nov 28 18:17:49 crc kubenswrapper[5024]: I1128 18:17:49.958093 5024 generic.go:334] "Generic (PLEG): container finished" podID="350bc718-21de-4145-8eb7-a41728be99a9" containerID="e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6" exitCode=0 Nov 28 18:17:49 crc kubenswrapper[5024]: I1128 18:17:49.958226 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr7pw" event={"ID":"350bc718-21de-4145-8eb7-a41728be99a9","Type":"ContainerDied","Data":"e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6"} Nov 28 18:17:49 crc kubenswrapper[5024]: I1128 18:17:49.958422 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr7pw" event={"ID":"350bc718-21de-4145-8eb7-a41728be99a9","Type":"ContainerStarted","Data":"463cf7462d85223d304f48955a33bf649004f38c23f6d8cf2475e8a9ef64f64c"} Nov 28 18:17:51 crc kubenswrapper[5024]: I1128 18:17:51.982846 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr7pw" event={"ID":"350bc718-21de-4145-8eb7-a41728be99a9","Type":"ContainerStarted","Data":"6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973"} Nov 28 18:17:56 crc kubenswrapper[5024]: I1128 18:17:56.085030 5024 generic.go:334] "Generic (PLEG): container finished" podID="350bc718-21de-4145-8eb7-a41728be99a9" containerID="6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973" exitCode=0 Nov 28 18:17:56 crc kubenswrapper[5024]: I1128 18:17:56.085076 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr7pw" event={"ID":"350bc718-21de-4145-8eb7-a41728be99a9","Type":"ContainerDied","Data":"6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973"} Nov 28 18:17:57 crc kubenswrapper[5024]: I1128 18:17:57.102180 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr7pw" event={"ID":"350bc718-21de-4145-8eb7-a41728be99a9","Type":"ContainerStarted","Data":"51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14"} Nov 28 18:17:57 crc kubenswrapper[5024]: I1128 18:17:57.174192 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gr7pw" podStartSLOduration=2.438111967 podStartE2EDuration="9.173801448s" 
podCreationTimestamp="2025-11-28 18:17:48 +0000 UTC" firstStartedPulling="2025-11-28 18:17:49.961239753 +0000 UTC m=+4772.010160658" lastFinishedPulling="2025-11-28 18:17:56.696929234 +0000 UTC m=+4778.745850139" observedRunningTime="2025-11-28 18:17:57.141203489 +0000 UTC m=+4779.190124394" watchObservedRunningTime="2025-11-28 18:17:57.173801448 +0000 UTC m=+4779.222722363" Nov 28 18:17:58 crc kubenswrapper[5024]: I1128 18:17:58.398409 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:58 crc kubenswrapper[5024]: I1128 18:17:58.398463 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:17:59 crc kubenswrapper[5024]: I1128 18:17:59.612843 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gr7pw" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="registry-server" probeResult="failure" output=< Nov 28 18:17:59 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:17:59 crc kubenswrapper[5024]: > Nov 28 18:18:09 crc kubenswrapper[5024]: I1128 18:18:09.454686 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gr7pw" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="registry-server" probeResult="failure" output=< Nov 28 18:18:09 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:18:09 crc kubenswrapper[5024]: > Nov 28 18:18:18 crc kubenswrapper[5024]: I1128 18:18:18.515462 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:18:18 crc kubenswrapper[5024]: I1128 18:18:18.584619 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:18:18 crc kubenswrapper[5024]: I1128 18:18:18.731194 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr7pw"] Nov 28 18:18:20 crc kubenswrapper[5024]: I1128 18:18:20.501685 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gr7pw" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="registry-server" containerID="cri-o://51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14" gracePeriod=2 Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.387587 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.513785 5024 generic.go:334] "Generic (PLEG): container finished" podID="350bc718-21de-4145-8eb7-a41728be99a9" containerID="51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14" exitCode=0 Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.513831 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr7pw" event={"ID":"350bc718-21de-4145-8eb7-a41728be99a9","Type":"ContainerDied","Data":"51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14"} Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.513881 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr7pw" event={"ID":"350bc718-21de-4145-8eb7-a41728be99a9","Type":"ContainerDied","Data":"463cf7462d85223d304f48955a33bf649004f38c23f6d8cf2475e8a9ef64f64c"} Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.514101 5024 scope.go:117] "RemoveContainer" containerID="51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.514312 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr7pw" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.558563 5024 scope.go:117] "RemoveContainer" containerID="6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.563865 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-utilities\") pod \"350bc718-21de-4145-8eb7-a41728be99a9\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.564002 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-catalog-content\") pod \"350bc718-21de-4145-8eb7-a41728be99a9\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.564105 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bbsc\" (UniqueName: \"kubernetes.io/projected/350bc718-21de-4145-8eb7-a41728be99a9-kube-api-access-6bbsc\") pod \"350bc718-21de-4145-8eb7-a41728be99a9\" (UID: \"350bc718-21de-4145-8eb7-a41728be99a9\") " Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.565437 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-utilities" (OuterVolumeSpecName: "utilities") pod "350bc718-21de-4145-8eb7-a41728be99a9" (UID: "350bc718-21de-4145-8eb7-a41728be99a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.585475 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350bc718-21de-4145-8eb7-a41728be99a9-kube-api-access-6bbsc" (OuterVolumeSpecName: "kube-api-access-6bbsc") pod "350bc718-21de-4145-8eb7-a41728be99a9" (UID: "350bc718-21de-4145-8eb7-a41728be99a9"). InnerVolumeSpecName "kube-api-access-6bbsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.587131 5024 scope.go:117] "RemoveContainer" containerID="e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.666309 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "350bc718-21de-4145-8eb7-a41728be99a9" (UID: "350bc718-21de-4145-8eb7-a41728be99a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.668859 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.668898 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bbsc\" (UniqueName: \"kubernetes.io/projected/350bc718-21de-4145-8eb7-a41728be99a9-kube-api-access-6bbsc\") on node \"crc\" DevicePath \"\"" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.668908 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/350bc718-21de-4145-8eb7-a41728be99a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.712585 5024 scope.go:117] "RemoveContainer" containerID="51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14" Nov 28 18:18:21 crc kubenswrapper[5024]: E1128 18:18:21.714031 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14\": container with ID starting with 51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14 not found: ID does not exist" containerID="51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.714318 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14"} err="failed to get container status \"51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14\": rpc error: code = NotFound desc = could not find container \"51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14\": container with ID starting with 51231a28e1046740f5aa4b06a8b97add825e12fbcb0e2eb0efdfc5facb77fc14 not found: ID does not exist" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.714364 5024 scope.go:117] "RemoveContainer" containerID="6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973" Nov 28 18:18:21 crc kubenswrapper[5024]: E1128 18:18:21.720934 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973\": container with ID starting with 6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973 not found: ID does not exist" containerID="6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.720981 5024 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973"} err="failed to get container status \"6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973\": rpc error: code = NotFound desc = could not find container \"6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973\": container with ID starting with 6b7891ce37f6087e256cd37c0297dc6388f014d09049bc07d4a16c9ba7bb8973 not found: ID does not exist" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.721004 5024 scope.go:117] "RemoveContainer" containerID="e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6" Nov 28 18:18:21 crc kubenswrapper[5024]: E1128 18:18:21.721420 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6\": container with ID starting with e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6 not found: ID does not exist" containerID="e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.721476 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6"} err="failed to get container status \"e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6\": rpc error: code = NotFound desc = could not find container \"e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6\": container with ID starting with e26076bdfd4e9b61fd5b6f34c2a3fde09109dc72ad99e24c8b38bcd06bccccc6 not found: ID does not exist" Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.855827 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr7pw"] Nov 28 18:18:21 crc kubenswrapper[5024]: I1128 18:18:21.866411 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gr7pw"] Nov 28 18:18:22 crc kubenswrapper[5024]: I1128 18:18:22.513864 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350bc718-21de-4145-8eb7-a41728be99a9" path="/var/lib/kubelet/pods/350bc718-21de-4145-8eb7-a41728be99a9/volumes" Nov 28 18:18:37 crc kubenswrapper[5024]: I1128 18:18:37.566624 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:18:37 crc kubenswrapper[5024]: I1128 18:18:37.567281 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:18:45 crc kubenswrapper[5024]: I1128 18:18:45.915942 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mbb92"] Nov 28 18:18:45 crc kubenswrapper[5024]: E1128 18:18:45.917779 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="registry-server" Nov 28 18:18:45 crc kubenswrapper[5024]: I1128 18:18:45.917802 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="registry-server" Nov 28 18:18:45 crc kubenswrapper[5024]: E1128 18:18:45.917914 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="extract-utilities" Nov 28 18:18:45 crc kubenswrapper[5024]: I1128 18:18:45.917933 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="extract-utilities" Nov 28 18:18:45 crc kubenswrapper[5024]: E1128 18:18:45.917978 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="extract-content" Nov 28 18:18:45 crc kubenswrapper[5024]: I1128 18:18:45.917984 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="extract-content" Nov 28 18:18:45 crc kubenswrapper[5024]: I1128 18:18:45.918740 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="350bc718-21de-4145-8eb7-a41728be99a9" containerName="registry-server" Nov 28 18:18:45 crc kubenswrapper[5024]: I1128 18:18:45.921508 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.008697 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbb92"] Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.113842 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-utilities\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.113918 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5drq\" (UniqueName: \"kubernetes.io/projected/0f7a326a-04e4-4260-b199-6f406e55624c-kube-api-access-s5drq\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.113976 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-catalog-content\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.216384 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-utilities\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.216445 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5drq\" (UniqueName: \"kubernetes.io/projected/0f7a326a-04e4-4260-b199-6f406e55624c-kube-api-access-s5drq\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.216481 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-catalog-content\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.229069 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-catalog-content\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.229197 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-utilities\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.487792 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5drq\" (UniqueName: \"kubernetes.io/projected/0f7a326a-04e4-4260-b199-6f406e55624c-kube-api-access-s5drq\") pod \"redhat-marketplace-mbb92\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:46 crc kubenswrapper[5024]: I1128 18:18:46.546767 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:47 crc kubenswrapper[5024]: I1128 18:18:47.731542 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbb92"] Nov 28 18:18:47 crc kubenswrapper[5024]: W1128 18:18:47.737895 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f7a326a_04e4_4260_b199_6f406e55624c.slice/crio-82c6d6ab8c2015345d8f98186ef1b5eb13d9444a91c92539d83de0042dcb68ab WatchSource:0}: Error finding container 82c6d6ab8c2015345d8f98186ef1b5eb13d9444a91c92539d83de0042dcb68ab: Status 404 returned error can't find the container with id 82c6d6ab8c2015345d8f98186ef1b5eb13d9444a91c92539d83de0042dcb68ab Nov 28 18:18:47 crc kubenswrapper[5024]: I1128 18:18:47.870425 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbb92" event={"ID":"0f7a326a-04e4-4260-b199-6f406e55624c","Type":"ContainerStarted","Data":"82c6d6ab8c2015345d8f98186ef1b5eb13d9444a91c92539d83de0042dcb68ab"} Nov 28 18:18:48 crc kubenswrapper[5024]: I1128 18:18:48.884586 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbb92" event={"ID":"0f7a326a-04e4-4260-b199-6f406e55624c","Type":"ContainerDied","Data":"7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314"} Nov 28 18:18:48 crc kubenswrapper[5024]: I1128 18:18:48.912409 5024 generic.go:334] "Generic (PLEG): container finished" podID="0f7a326a-04e4-4260-b199-6f406e55624c" containerID="7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314" exitCode=0 Nov 28 18:18:51 crc kubenswrapper[5024]: I1128 18:18:51.110464 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbb92" 
event={"ID":"0f7a326a-04e4-4260-b199-6f406e55624c","Type":"ContainerStarted","Data":"883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6"} Nov 28 18:18:52 crc kubenswrapper[5024]: I1128 18:18:52.124178 5024 generic.go:334] "Generic (PLEG): container finished" podID="0f7a326a-04e4-4260-b199-6f406e55624c" containerID="883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6" exitCode=0 Nov 28 18:18:52 crc kubenswrapper[5024]: I1128 18:18:52.124267 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbb92" event={"ID":"0f7a326a-04e4-4260-b199-6f406e55624c","Type":"ContainerDied","Data":"883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6"} Nov 28 18:18:53 crc kubenswrapper[5024]: I1128 18:18:53.152726 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbb92" event={"ID":"0f7a326a-04e4-4260-b199-6f406e55624c","Type":"ContainerStarted","Data":"2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43"} Nov 28 18:18:53 crc kubenswrapper[5024]: I1128 18:18:53.378178 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mbb92" podStartSLOduration=4.551040512 podStartE2EDuration="8.36365789s" podCreationTimestamp="2025-11-28 18:18:45 +0000 UTC" firstStartedPulling="2025-11-28 18:18:48.886828644 +0000 UTC m=+4830.935749549" lastFinishedPulling="2025-11-28 18:18:52.699446022 +0000 UTC m=+4834.748366927" observedRunningTime="2025-11-28 18:18:53.351534964 +0000 UTC m=+4835.400455889" watchObservedRunningTime="2025-11-28 18:18:53.36365789 +0000 UTC m=+4835.412578795" Nov 28 18:18:56 crc kubenswrapper[5024]: I1128 18:18:56.547873 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:56 crc kubenswrapper[5024]: I1128 18:18:56.548580 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:56 crc kubenswrapper[5024]: I1128 18:18:56.620766 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:57 crc kubenswrapper[5024]: I1128 18:18:57.264113 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:18:57 crc kubenswrapper[5024]: I1128 18:18:57.350686 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbb92"] Nov 28 18:18:59 crc kubenswrapper[5024]: I1128 18:18:59.219279 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mbb92" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="registry-server" containerID="cri-o://2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43" gracePeriod=2 Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.160475 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.188890 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-catalog-content\") pod \"0f7a326a-04e4-4260-b199-6f406e55624c\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.189135 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-utilities\") pod \"0f7a326a-04e4-4260-b199-6f406e55624c\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.189262 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5drq\" (UniqueName: \"kubernetes.io/projected/0f7a326a-04e4-4260-b199-6f406e55624c-kube-api-access-s5drq\") pod \"0f7a326a-04e4-4260-b199-6f406e55624c\" (UID: \"0f7a326a-04e4-4260-b199-6f406e55624c\") " Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.194148 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-utilities" (OuterVolumeSpecName: "utilities") pod "0f7a326a-04e4-4260-b199-6f406e55624c" (UID: "0f7a326a-04e4-4260-b199-6f406e55624c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.206531 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f7a326a-04e4-4260-b199-6f406e55624c-kube-api-access-s5drq" (OuterVolumeSpecName: "kube-api-access-s5drq") pod "0f7a326a-04e4-4260-b199-6f406e55624c" (UID: "0f7a326a-04e4-4260-b199-6f406e55624c"). InnerVolumeSpecName "kube-api-access-s5drq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.224422 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f7a326a-04e4-4260-b199-6f406e55624c" (UID: "0f7a326a-04e4-4260-b199-6f406e55624c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.249151 5024 generic.go:334] "Generic (PLEG): container finished" podID="0f7a326a-04e4-4260-b199-6f406e55624c" containerID="2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43" exitCode=0 Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.249302 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbb92" event={"ID":"0f7a326a-04e4-4260-b199-6f406e55624c","Type":"ContainerDied","Data":"2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43"} Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.249338 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbb92" event={"ID":"0f7a326a-04e4-4260-b199-6f406e55624c","Type":"ContainerDied","Data":"82c6d6ab8c2015345d8f98186ef1b5eb13d9444a91c92539d83de0042dcb68ab"} Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.249707 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbb92" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.250530 5024 scope.go:117] "RemoveContainer" containerID="2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.292824 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.292857 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f7a326a-04e4-4260-b199-6f406e55624c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.292868 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5drq\" (UniqueName: \"kubernetes.io/projected/0f7a326a-04e4-4260-b199-6f406e55624c-kube-api-access-s5drq\") on node \"crc\" DevicePath \"\"" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.300743 5024 scope.go:117] "RemoveContainer" containerID="883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.335921 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbb92"] Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.347088 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbb92"] Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.365583 5024 scope.go:117] "RemoveContainer" containerID="7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.424375 5024 scope.go:117] "RemoveContainer" containerID="2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43" Nov 28 18:19:00 crc kubenswrapper[5024]: E1128 18:19:00.429087 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43\": container with ID starting with 2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43 not found: ID does not exist" containerID="2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.429355 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43"} err="failed to get container status \"2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43\": rpc error: code = NotFound desc = could not find container \"2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43\": container with ID starting with 2963d271591789435693b27cd70104126eb90e456af8a7a791355ad164156b43 not found: ID does not exist" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.429399 5024 scope.go:117] "RemoveContainer" containerID="883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6" Nov 28 18:19:00 crc kubenswrapper[5024]: E1128 18:19:00.429851 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6\": container with ID starting with 883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6 not found: ID 
does not exist" containerID="883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.429896 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6"} err="failed to get container status \"883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6\": rpc error: code = NotFound desc = could not find container \"883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6\": container with ID starting with 883012f0fdf64047db33c46bb29315cb044344cb561733af784d637e06e4e1d6 not found: ID does not exist" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.429923 5024 scope.go:117] "RemoveContainer" containerID="7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314" Nov 28 18:19:00 crc kubenswrapper[5024]: E1128 18:19:00.430338 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314\": container with ID starting with 7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314 not found: ID does not exist" containerID="7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.430373 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314"} err="failed to get container status \"7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314\": rpc error: code = NotFound desc = could not find container \"7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314\": container with ID starting with 7f913cc1ef1610fe84b6b2a0058e88a15c8410a402262b79f6feb200aae1d314 not found: ID does not exist" Nov 28 18:19:00 crc kubenswrapper[5024]: I1128 18:19:00.511708 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" path="/var/lib/kubelet/pods/0f7a326a-04e4-4260-b199-6f406e55624c/volumes" Nov 28 18:19:07 crc kubenswrapper[5024]: I1128 18:19:07.565174 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:19:07 crc kubenswrapper[5024]: I1128 18:19:07.565781 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:19:37 crc kubenswrapper[5024]: I1128 18:19:37.564853 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:19:37 crc kubenswrapper[5024]: I1128 18:19:37.566363 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:19:37 crc kubenswrapper[5024]: I1128 18:19:37.566429 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 18:19:37 crc kubenswrapper[5024]: I1128 18:19:37.567513 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d06361962fe509995d04df9a9542446ec780fadff703acb27501511c9c538a1c"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 18:19:37 crc kubenswrapper[5024]: I1128 18:19:37.567572 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://d06361962fe509995d04df9a9542446ec780fadff703acb27501511c9c538a1c" gracePeriod=600 Nov 28 18:19:38 crc kubenswrapper[5024]: I1128 18:19:38.773922 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="d06361962fe509995d04df9a9542446ec780fadff703acb27501511c9c538a1c" exitCode=0 Nov 28 18:19:38 crc kubenswrapper[5024]: I1128 18:19:38.773977 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"d06361962fe509995d04df9a9542446ec780fadff703acb27501511c9c538a1c"} Nov 28 18:19:38 crc kubenswrapper[5024]: I1128 18:19:38.774507 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"} Nov 28 18:19:38 crc kubenswrapper[5024]: I1128 18:19:38.774538 5024 scope.go:117] "RemoveContainer" containerID="f4b67f541479d1147f2ecfdfde8ce04f913c6e4948ebb02558214ea0bc45fb74" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.011451 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gb2dc"] Nov 28 18:19:50 crc kubenswrapper[5024]: E1128 18:19:50.013214 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="extract-utilities" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.013236 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="extract-utilities" Nov 28 18:19:50 crc kubenswrapper[5024]: E1128 18:19:50.013275 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="extract-content" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.013283 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="extract-content" Nov 28 18:19:50 crc kubenswrapper[5024]: E1128 18:19:50.013331 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="registry-server" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.013341 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="registry-server" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.013622 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f7a326a-04e4-4260-b199-6f406e55624c" containerName="registry-server" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.017840 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.137159 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gb2dc"] Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.179237 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-catalog-content\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.179539 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-utilities\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.179840 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmf8\" (UniqueName: \"kubernetes.io/projected/2492edef-f4c9-409b-ae65-29c14c071833-kube-api-access-tcmf8\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.282302 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-catalog-content\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.282401 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-utilities\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.282479 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcmf8\" (UniqueName: \"kubernetes.io/projected/2492edef-f4c9-409b-ae65-29c14c071833-kube-api-access-tcmf8\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.284205 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-catalog-content\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.284580 5024 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-utilities\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.308174 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcmf8\" (UniqueName: \"kubernetes.io/projected/2492edef-f4c9-409b-ae65-29c14c071833-kube-api-access-tcmf8\") pod \"community-operators-gb2dc\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:50 crc kubenswrapper[5024]: I1128 18:19:50.340333 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:19:51 crc kubenswrapper[5024]: I1128 18:19:51.120534 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gb2dc"] Nov 28 18:19:52 crc kubenswrapper[5024]: I1128 18:19:52.084522 5024 generic.go:334] "Generic (PLEG): container finished" podID="2492edef-f4c9-409b-ae65-29c14c071833" containerID="a84044729f6fbca46283556a0dfce0def3b731b6f15cb750d068ca1dccc8e20b" exitCode=0 Nov 28 18:19:52 crc kubenswrapper[5024]: I1128 18:19:52.085029 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb2dc" event={"ID":"2492edef-f4c9-409b-ae65-29c14c071833","Type":"ContainerDied","Data":"a84044729f6fbca46283556a0dfce0def3b731b6f15cb750d068ca1dccc8e20b"} Nov 28 18:19:52 crc kubenswrapper[5024]: I1128 18:19:52.085068 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb2dc" event={"ID":"2492edef-f4c9-409b-ae65-29c14c071833","Type":"ContainerStarted","Data":"6433d1d287f38c5ea28a03adde17fe6081740c6227f096fd46787a1682e4201a"} Nov 28 18:19:54 crc kubenswrapper[5024]: I1128 18:19:54.116710 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb2dc" event={"ID":"2492edef-f4c9-409b-ae65-29c14c071833","Type":"ContainerStarted","Data":"332f2828cfb00c2bbe55749e0348cee1bfc3faa4130a346c8e8be578e141e8c4"} Nov 28 18:19:55 crc kubenswrapper[5024]: I1128 18:19:55.136086 5024 generic.go:334] "Generic (PLEG): container finished" podID="2492edef-f4c9-409b-ae65-29c14c071833" containerID="332f2828cfb00c2bbe55749e0348cee1bfc3faa4130a346c8e8be578e141e8c4" exitCode=0 Nov 28 18:19:55 crc kubenswrapper[5024]: I1128 18:19:55.136418 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb2dc" event={"ID":"2492edef-f4c9-409b-ae65-29c14c071833","Type":"ContainerDied","Data":"332f2828cfb00c2bbe55749e0348cee1bfc3faa4130a346c8e8be578e141e8c4"} Nov 28 18:19:56 crc kubenswrapper[5024]: I1128 18:19:56.150830 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb2dc" event={"ID":"2492edef-f4c9-409b-ae65-29c14c071833","Type":"ContainerStarted","Data":"4e5d4572b1255f9597ae13cadc306686c5aab0b12862e9b4261043f7fe4fea9d"} Nov 28 18:19:56 crc kubenswrapper[5024]: I1128 18:19:56.178025 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gb2dc" podStartSLOduration=3.6313498539999998 podStartE2EDuration="7.177546748s" podCreationTimestamp="2025-11-28 18:19:49 +0000 UTC" firstStartedPulling="2025-11-28 
18:19:52.088387587 +0000 UTC m=+4894.137308492" lastFinishedPulling="2025-11-28 18:19:55.634584481 +0000 UTC m=+4897.683505386" observedRunningTime="2025-11-28 18:19:56.168740837 +0000 UTC m=+4898.217661752" watchObservedRunningTime="2025-11-28 18:19:56.177546748 +0000 UTC m=+4898.226467653" Nov 28 18:20:00 crc kubenswrapper[5024]: I1128 18:20:00.341318 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:20:00 crc kubenswrapper[5024]: I1128 18:20:00.341938 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:20:00 crc kubenswrapper[5024]: I1128 18:20:00.391856 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:20:01 crc kubenswrapper[5024]: I1128 18:20:01.944926 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:20:04 crc kubenswrapper[5024]: I1128 18:20:04.392607 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gb2dc"] Nov 28 18:20:04 crc kubenswrapper[5024]: I1128 18:20:04.459848 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gb2dc" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="registry-server" containerID="cri-o://4e5d4572b1255f9597ae13cadc306686c5aab0b12862e9b4261043f7fe4fea9d" gracePeriod=2 Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.242071 5024 generic.go:334] "Generic (PLEG): container finished" podID="2492edef-f4c9-409b-ae65-29c14c071833" containerID="4e5d4572b1255f9597ae13cadc306686c5aab0b12862e9b4261043f7fe4fea9d" exitCode=0 Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.242156 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb2dc" event={"ID":"2492edef-f4c9-409b-ae65-29c14c071833","Type":"ContainerDied","Data":"4e5d4572b1255f9597ae13cadc306686c5aab0b12862e9b4261043f7fe4fea9d"} Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.242452 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gb2dc" event={"ID":"2492edef-f4c9-409b-ae65-29c14c071833","Type":"ContainerDied","Data":"6433d1d287f38c5ea28a03adde17fe6081740c6227f096fd46787a1682e4201a"} Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.242472 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6433d1d287f38c5ea28a03adde17fe6081740c6227f096fd46787a1682e4201a" Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.305734 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.395675 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcmf8\" (UniqueName: \"kubernetes.io/projected/2492edef-f4c9-409b-ae65-29c14c071833-kube-api-access-tcmf8\") pod \"2492edef-f4c9-409b-ae65-29c14c071833\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.396290 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-utilities\") pod \"2492edef-f4c9-409b-ae65-29c14c071833\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.396352 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-catalog-content\") pod \"2492edef-f4c9-409b-ae65-29c14c071833\" (UID: \"2492edef-f4c9-409b-ae65-29c14c071833\") " Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.397402 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-utilities" (OuterVolumeSpecName: "utilities") pod "2492edef-f4c9-409b-ae65-29c14c071833" (UID: "2492edef-f4c9-409b-ae65-29c14c071833"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.410554 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2492edef-f4c9-409b-ae65-29c14c071833-kube-api-access-tcmf8" (OuterVolumeSpecName: "kube-api-access-tcmf8") pod "2492edef-f4c9-409b-ae65-29c14c071833" (UID: "2492edef-f4c9-409b-ae65-29c14c071833"). InnerVolumeSpecName "kube-api-access-tcmf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.454858 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2492edef-f4c9-409b-ae65-29c14c071833" (UID: "2492edef-f4c9-409b-ae65-29c14c071833"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.499566 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.499594 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2492edef-f4c9-409b-ae65-29c14c071833-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:20:05 crc kubenswrapper[5024]: I1128 18:20:05.499618 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcmf8\" (UniqueName: \"kubernetes.io/projected/2492edef-f4c9-409b-ae65-29c14c071833-kube-api-access-tcmf8\") on node \"crc\" DevicePath \"\"" Nov 28 18:20:06 crc kubenswrapper[5024]: I1128 18:20:06.251078 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gb2dc" Nov 28 18:20:06 crc kubenswrapper[5024]: I1128 18:20:06.296988 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gb2dc"] Nov 28 18:20:06 crc kubenswrapper[5024]: I1128 18:20:06.311342 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gb2dc"] Nov 28 18:20:06 crc kubenswrapper[5024]: I1128 18:20:06.514492 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2492edef-f4c9-409b-ae65-29c14c071833" path="/var/lib/kubelet/pods/2492edef-f4c9-409b-ae65-29c14c071833/volumes" Nov 28 18:21:10 crc kubenswrapper[5024]: I1128 18:21:10.712043 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6786944b4d-h88pn" podUID="2675bece-a200-49ea-a9b0-5e394ae7167d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 18:22:07 crc kubenswrapper[5024]: I1128 18:22:07.565310 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:22:07 crc kubenswrapper[5024]: I1128 18:22:07.566965 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:22:37 crc kubenswrapper[5024]: I1128 18:22:37.565302 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:22:37 crc kubenswrapper[5024]: I1128 18:22:37.565779 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:23:07 crc kubenswrapper[5024]: I1128 18:23:07.564544 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:23:07 crc kubenswrapper[5024]: I1128 18:23:07.565278 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:23:07 crc kubenswrapper[5024]: I1128 18:23:07.565345 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 18:23:07 
Nov 28 18:23:07 crc kubenswrapper[5024]: I1128 18:23:07.566583 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" gracePeriod=600
Nov 28 18:23:07 crc kubenswrapper[5024]: E1128 18:23:07.688127 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:23:08 crc kubenswrapper[5024]: I1128 18:23:08.473813 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" exitCode=0
Nov 28 18:23:08 crc kubenswrapper[5024]: I1128 18:23:08.473908 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"}
Nov 28 18:23:08 crc kubenswrapper[5024]: I1128 18:23:08.474284 5024 scope.go:117] "RemoveContainer" containerID="d06361962fe509995d04df9a9542446ec780fadff703acb27501511c9c538a1c"
Nov 28 18:23:08 crc kubenswrapper[5024]: I1128 18:23:08.475295 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:23:08 crc kubenswrapper[5024]: E1128 18:23:08.475808 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:23:09 crc kubenswrapper[5024]: I1128 18:23:09.561769 5024 scope.go:117] "RemoveContainer" containerID="ec00d09f83338dde11df14545851a157089ad0cf96cc4e7ee391da01bf84a18f"
Nov 28 18:23:09 crc kubenswrapper[5024]: I1128 18:23:09.584218 5024 scope.go:117] "RemoveContainer" containerID="20f051a5103fc79c77b7917fee56596bfd51a25ff5e48e29f5cff912af15589a"
Nov 28 18:23:09 crc kubenswrapper[5024]: I1128 18:23:09.611335 5024 scope.go:117] "RemoveContainer" containerID="094aff01649a1d1c73ed50ea196e1f9969a036f63ff5130c986add8ca7e81941"
Nov 28 18:23:21 crc kubenswrapper[5024]: I1128 18:23:21.498412 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:23:21 crc kubenswrapper[5024]: E1128 18:23:21.499305 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:23:35 crc kubenswrapper[5024]: I1128 18:23:35.498468 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:23:35 crc kubenswrapper[5024]: E1128 18:23:35.500636 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:23:49 crc kubenswrapper[5024]: I1128 18:23:49.498234 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:23:49 crc kubenswrapper[5024]: E1128 18:23:49.499329 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:24:02 crc kubenswrapper[5024]: I1128 18:24:02.498423 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:24:02 crc kubenswrapper[5024]: E1128 18:24:02.499374 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:24:16 crc kubenswrapper[5024]: I1128 18:24:16.499289 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:24:16 crc kubenswrapper[5024]: E1128 18:24:16.500068 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:24:29 crc kubenswrapper[5024]: I1128 18:24:29.498120 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:24:29 crc kubenswrapper[5024]: E1128 18:24:29.499159 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:24:40 crc kubenswrapper[5024]: I1128 18:24:40.502082 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:24:40 crc kubenswrapper[5024]: E1128 18:24:40.502949 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:24:54 crc kubenswrapper[5024]: I1128 18:24:54.498497 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:24:54 crc kubenswrapper[5024]: E1128 18:24:54.499398 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:25:07 crc kubenswrapper[5024]: I1128 18:25:07.498148 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:25:07 crc kubenswrapper[5024]: E1128 18:25:07.498938 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:25:22 crc kubenswrapper[5024]: I1128 18:25:22.498702 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:25:22 crc kubenswrapper[5024]: E1128 18:25:22.499953 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:25:34 crc kubenswrapper[5024]: I1128 18:25:34.498806 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:25:34 crc kubenswrapper[5024]: E1128 18:25:34.499876 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:25:46 crc kubenswrapper[5024]: I1128 18:25:46.498349 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:25:46 crc kubenswrapper[5024]: E1128 18:25:46.499338 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:26:01 crc kubenswrapper[5024]: I1128 18:26:01.504485 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:26:01 crc kubenswrapper[5024]: E1128 18:26:01.506420 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:26:10 crc kubenswrapper[5024]: I1128 18:26:10.234650 5024 scope.go:117] "RemoveContainer" containerID="332f2828cfb00c2bbe55749e0348cee1bfc3faa4130a346c8e8be578e141e8c4"
Nov 28 18:26:10 crc kubenswrapper[5024]: I1128 18:26:10.280464 5024 scope.go:117] "RemoveContainer" containerID="4e5d4572b1255f9597ae13cadc306686c5aab0b12862e9b4261043f7fe4fea9d"
Nov 28 18:26:10 crc kubenswrapper[5024]: I1128 18:26:10.341781 5024 scope.go:117] "RemoveContainer" containerID="a84044729f6fbca46283556a0dfce0def3b731b6f15cb750d068ca1dccc8e20b"
Nov 28 18:26:15 crc kubenswrapper[5024]: I1128 18:26:15.498814 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:26:15 crc kubenswrapper[5024]: E1128 18:26:15.500003 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:26:30 crc kubenswrapper[5024]: I1128 18:26:30.498410 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:26:30 crc kubenswrapper[5024]: E1128 18:26:30.499322 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:26:45 crc kubenswrapper[5024]: I1128 18:26:45.498558 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7"
Nov 28 18:26:45 crc kubenswrapper[5024]: E1128 18:26:45.499673 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:26:58 crc kubenswrapper[5024]: I1128 18:26:58.506620 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:26:58 crc kubenswrapper[5024]: E1128 18:26:58.507390 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:27:09 crc kubenswrapper[5024]: I1128 18:27:09.497616 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:27:09 crc kubenswrapper[5024]: E1128 18:27:09.499688 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:27:23 crc kubenswrapper[5024]: I1128 18:27:23.498235 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:27:23 crc kubenswrapper[5024]: E1128 18:27:23.499346 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:27:37 crc kubenswrapper[5024]: I1128 18:27:37.499581 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:27:37 crc kubenswrapper[5024]: E1128 18:27:37.500445 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:27:51 crc kubenswrapper[5024]: I1128 18:27:51.497985 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:27:51 crc kubenswrapper[5024]: E1128 18:27:51.498835 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.805468 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qzh8j"] Nov 28 18:27:59 crc kubenswrapper[5024]: E1128 18:27:59.806485 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="extract-utilities" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.806506 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="extract-utilities" Nov 28 18:27:59 crc kubenswrapper[5024]: E1128 18:27:59.806558 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="registry-server" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.806566 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="registry-server" Nov 28 18:27:59 crc kubenswrapper[5024]: E1128 18:27:59.806579 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="extract-content" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.806585 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="extract-content" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.806851 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2492edef-f4c9-409b-ae65-29c14c071833" containerName="registry-server" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.808698 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.830285 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qzh8j"] Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.945131 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4r9x\" (UniqueName: \"kubernetes.io/projected/71946fbb-b90c-40e8-bee7-ab31bb718105-kube-api-access-l4r9x\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.945639 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-catalog-content\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:27:59 crc kubenswrapper[5024]: I1128 18:27:59.945870 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-utilities\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.048714 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-utilities\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.048824 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4r9x\" (UniqueName: \"kubernetes.io/projected/71946fbb-b90c-40e8-bee7-ab31bb718105-kube-api-access-l4r9x\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.048952 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-catalog-content\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.049385 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-utilities\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.049408 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-catalog-content\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.068895 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l4r9x\" (UniqueName: \"kubernetes.io/projected/71946fbb-b90c-40e8-bee7-ab31bb718105-kube-api-access-l4r9x\") pod \"redhat-operators-qzh8j\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.129502 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:00 crc kubenswrapper[5024]: I1128 18:28:00.664599 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qzh8j"] Nov 28 18:28:01 crc kubenswrapper[5024]: I1128 18:28:01.399849 5024 generic.go:334] "Generic (PLEG): container finished" podID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerID="a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47" exitCode=0 Nov 28 18:28:01 crc kubenswrapper[5024]: I1128 18:28:01.399946 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzh8j" event={"ID":"71946fbb-b90c-40e8-bee7-ab31bb718105","Type":"ContainerDied","Data":"a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47"} Nov 28 18:28:01 crc kubenswrapper[5024]: I1128 18:28:01.400185 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzh8j" event={"ID":"71946fbb-b90c-40e8-bee7-ab31bb718105","Type":"ContainerStarted","Data":"76e04da5bdc781ed8c160ff7f1714ac00e8113beb87edc211d31c270a2564f10"} Nov 28 18:28:01 crc kubenswrapper[5024]: I1128 18:28:01.402398 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 18:28:02 crc kubenswrapper[5024]: I1128 18:28:02.498004 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:28:02 crc kubenswrapper[5024]: E1128 18:28:02.499160 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:28:03 crc kubenswrapper[5024]: I1128 18:28:03.425405 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzh8j" event={"ID":"71946fbb-b90c-40e8-bee7-ab31bb718105","Type":"ContainerStarted","Data":"4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70"} Nov 28 18:28:06 crc kubenswrapper[5024]: I1128 18:28:06.513535 5024 generic.go:334] "Generic (PLEG): container finished" podID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerID="4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70" exitCode=0 Nov 28 18:28:06 crc kubenswrapper[5024]: I1128 18:28:06.519151 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzh8j" event={"ID":"71946fbb-b90c-40e8-bee7-ab31bb718105","Type":"ContainerDied","Data":"4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70"} Nov 28 18:28:07 crc kubenswrapper[5024]: I1128 18:28:07.526299 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzh8j" 
event={"ID":"71946fbb-b90c-40e8-bee7-ab31bb718105","Type":"ContainerStarted","Data":"a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75"} Nov 28 18:28:10 crc kubenswrapper[5024]: I1128 18:28:10.129664 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:10 crc kubenswrapper[5024]: I1128 18:28:10.130348 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.188432 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qzh8j" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="registry-server" probeResult="failure" output=< Nov 28 18:28:11 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:28:11 crc kubenswrapper[5024]: > Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.574921 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qzh8j" podStartSLOduration=6.835681549 podStartE2EDuration="12.574885576s" podCreationTimestamp="2025-11-28 18:27:59 +0000 UTC" firstStartedPulling="2025-11-28 18:28:01.401976283 +0000 UTC m=+5383.450897188" lastFinishedPulling="2025-11-28 18:28:07.14118027 +0000 UTC m=+5389.190101215" observedRunningTime="2025-11-28 18:28:07.650673122 +0000 UTC m=+5389.699594027" watchObservedRunningTime="2025-11-28 18:28:11.574885576 +0000 UTC m=+5393.623806491" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.588158 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bwjrm"] Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.591375 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.607352 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bwjrm"] Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.792234 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85w4\" (UniqueName: \"kubernetes.io/projected/db226856-bdd9-4678-8b04-4358b6c464d1-kube-api-access-b85w4\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.792365 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-catalog-content\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.792474 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-utilities\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.894319 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-utilities\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.894513 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85w4\" (UniqueName: \"kubernetes.io/projected/db226856-bdd9-4678-8b04-4358b6c464d1-kube-api-access-b85w4\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.894552 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-catalog-content\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.894861 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-utilities\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.895059 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-catalog-content\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.920794 5024 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b85w4\" (UniqueName: \"kubernetes.io/projected/db226856-bdd9-4678-8b04-4358b6c464d1-kube-api-access-b85w4\") pod \"certified-operators-bwjrm\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:11 crc kubenswrapper[5024]: I1128 18:28:11.923835 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:12 crc kubenswrapper[5024]: I1128 18:28:12.542205 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bwjrm"] Nov 28 18:28:12 crc kubenswrapper[5024]: I1128 18:28:12.605992 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwjrm" event={"ID":"db226856-bdd9-4678-8b04-4358b6c464d1","Type":"ContainerStarted","Data":"a983fcfee614dbf04d14bd3fe00ce4f904d4e892f1ff2a3f5a010571e82d33a8"} Nov 28 18:28:13 crc kubenswrapper[5024]: I1128 18:28:13.499588 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:28:13 crc kubenswrapper[5024]: I1128 18:28:13.617784 5024 generic.go:334] "Generic (PLEG): container finished" podID="db226856-bdd9-4678-8b04-4358b6c464d1" containerID="053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406" exitCode=0 Nov 28 18:28:13 crc kubenswrapper[5024]: I1128 18:28:13.617825 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwjrm" event={"ID":"db226856-bdd9-4678-8b04-4358b6c464d1","Type":"ContainerDied","Data":"053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406"} Nov 28 18:28:14 crc kubenswrapper[5024]: I1128 18:28:14.640569 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"12f4246b0b801d2f2b8b304991ac24b889477c8dfb6a5f2330e902c248321a44"} Nov 28 18:28:15 crc kubenswrapper[5024]: I1128 18:28:15.654607 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwjrm" event={"ID":"db226856-bdd9-4678-8b04-4358b6c464d1","Type":"ContainerStarted","Data":"80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01"} Nov 28 18:28:16 crc kubenswrapper[5024]: I1128 18:28:16.672662 5024 generic.go:334] "Generic (PLEG): container finished" podID="db226856-bdd9-4678-8b04-4358b6c464d1" containerID="80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01" exitCode=0 Nov 28 18:28:16 crc kubenswrapper[5024]: I1128 18:28:16.673018 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwjrm" event={"ID":"db226856-bdd9-4678-8b04-4358b6c464d1","Type":"ContainerDied","Data":"80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01"} Nov 28 18:28:17 crc kubenswrapper[5024]: I1128 18:28:17.685822 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwjrm" event={"ID":"db226856-bdd9-4678-8b04-4358b6c464d1","Type":"ContainerStarted","Data":"fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4"} Nov 28 18:28:21 crc kubenswrapper[5024]: I1128 18:28:21.188635 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qzh8j" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" 
containerName="registry-server" probeResult="failure" output=< Nov 28 18:28:21 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:28:21 crc kubenswrapper[5024]: > Nov 28 18:28:21 crc kubenswrapper[5024]: I1128 18:28:21.924376 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:21 crc kubenswrapper[5024]: I1128 18:28:21.924740 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:22 crc kubenswrapper[5024]: I1128 18:28:22.987915 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bwjrm" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="registry-server" probeResult="failure" output=< Nov 28 18:28:22 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:28:22 crc kubenswrapper[5024]: > Nov 28 18:28:30 crc kubenswrapper[5024]: I1128 18:28:30.177402 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:30 crc kubenswrapper[5024]: I1128 18:28:30.211391 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bwjrm" podStartSLOduration=15.481186528 podStartE2EDuration="19.211357783s" podCreationTimestamp="2025-11-28 18:28:11 +0000 UTC" firstStartedPulling="2025-11-28 18:28:13.620257634 +0000 UTC m=+5395.669178539" lastFinishedPulling="2025-11-28 18:28:17.350428889 +0000 UTC m=+5399.399349794" observedRunningTime="2025-11-28 18:28:17.716684083 +0000 UTC m=+5399.765604988" watchObservedRunningTime="2025-11-28 18:28:30.211357783 +0000 UTC m=+5412.260278708" Nov 28 18:28:30 crc kubenswrapper[5024]: I1128 18:28:30.236953 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:30 crc kubenswrapper[5024]: I1128 18:28:30.429134 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qzh8j"] Nov 28 18:28:31 crc kubenswrapper[5024]: I1128 18:28:31.853130 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qzh8j" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="registry-server" containerID="cri-o://a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75" gracePeriod=2 Nov 28 18:28:31 crc kubenswrapper[5024]: I1128 18:28:31.980728 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.042915 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.433312 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.551504 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4r9x\" (UniqueName: \"kubernetes.io/projected/71946fbb-b90c-40e8-bee7-ab31bb718105-kube-api-access-l4r9x\") pod \"71946fbb-b90c-40e8-bee7-ab31bb718105\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.551856 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-utilities\") pod \"71946fbb-b90c-40e8-bee7-ab31bb718105\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.551881 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-catalog-content\") pod \"71946fbb-b90c-40e8-bee7-ab31bb718105\" (UID: \"71946fbb-b90c-40e8-bee7-ab31bb718105\") " Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.552739 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-utilities" (OuterVolumeSpecName: "utilities") pod "71946fbb-b90c-40e8-bee7-ab31bb718105" (UID: "71946fbb-b90c-40e8-bee7-ab31bb718105"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.554531 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.559561 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71946fbb-b90c-40e8-bee7-ab31bb718105-kube-api-access-l4r9x" (OuterVolumeSpecName: "kube-api-access-l4r9x") pod "71946fbb-b90c-40e8-bee7-ab31bb718105" (UID: "71946fbb-b90c-40e8-bee7-ab31bb718105"). InnerVolumeSpecName "kube-api-access-l4r9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.659301 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4r9x\" (UniqueName: \"kubernetes.io/projected/71946fbb-b90c-40e8-bee7-ab31bb718105-kube-api-access-l4r9x\") on node \"crc\" DevicePath \"\"" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.661975 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71946fbb-b90c-40e8-bee7-ab31bb718105" (UID: "71946fbb-b90c-40e8-bee7-ab31bb718105"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.761708 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71946fbb-b90c-40e8-bee7-ab31bb718105-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.867859 5024 generic.go:334] "Generic (PLEG): container finished" podID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerID="a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75" exitCode=0 Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.867932 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qzh8j" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.867960 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzh8j" event={"ID":"71946fbb-b90c-40e8-bee7-ab31bb718105","Type":"ContainerDied","Data":"a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75"} Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.868538 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzh8j" event={"ID":"71946fbb-b90c-40e8-bee7-ab31bb718105","Type":"ContainerDied","Data":"76e04da5bdc781ed8c160ff7f1714ac00e8113beb87edc211d31c270a2564f10"} Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.868588 5024 scope.go:117] "RemoveContainer" containerID="a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.904662 5024 scope.go:117] "RemoveContainer" containerID="4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70" Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.915760 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qzh8j"] Nov 28 18:28:32 crc kubenswrapper[5024]: I1128 18:28:32.928314 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qzh8j"] Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.043925 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bwjrm"] Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.507470 5024 scope.go:117] "RemoveContainer" containerID="a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47" Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.561282 5024 scope.go:117] "RemoveContainer" containerID="a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75" Nov 28 18:28:33 crc kubenswrapper[5024]: E1128 18:28:33.563147 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75\": container with ID starting with a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75 not found: ID does not exist" containerID="a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75" Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.563188 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75"} err="failed to get container status \"a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75\": rpc error: code = NotFound desc = could not find container 
\"a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75\": container with ID starting with a20ae1c642ee5d4004664683d4d5ab291c1b297832251ead6c7f05a14dfdac75 not found: ID does not exist" Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.563238 5024 scope.go:117] "RemoveContainer" containerID="4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70" Nov 28 18:28:33 crc kubenswrapper[5024]: E1128 18:28:33.563702 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70\": container with ID starting with 4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70 not found: ID does not exist" containerID="4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70" Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.563737 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70"} err="failed to get container status \"4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70\": rpc error: code = NotFound desc = could not find container \"4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70\": container with ID starting with 4233ec5ad56a8e6a2a2e85d0b5175c9c91f9d73e98798c8e7a10be8e350eaa70 not found: ID does not exist" Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.563776 5024 scope.go:117] "RemoveContainer" containerID="a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47" Nov 28 18:28:33 crc kubenswrapper[5024]: E1128 18:28:33.564123 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47\": container with ID starting with a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47 not found: ID does not exist" containerID="a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47" Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.564167 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47"} err="failed to get container status \"a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47\": rpc error: code = NotFound desc = could not find container \"a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47\": container with ID starting with a963913edec052e8df0af4abc7109e6b8a6080fc071f9115a5ae24f796a72f47 not found: ID does not exist" Nov 28 18:28:33 crc kubenswrapper[5024]: I1128 18:28:33.881733 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bwjrm" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="registry-server" containerID="cri-o://fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4" gracePeriod=2 Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.491613 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.518094 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" path="/var/lib/kubelet/pods/71946fbb-b90c-40e8-bee7-ab31bb718105/volumes" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.620881 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-utilities\") pod \"db226856-bdd9-4678-8b04-4358b6c464d1\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.620961 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-catalog-content\") pod \"db226856-bdd9-4678-8b04-4358b6c464d1\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.621131 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b85w4\" (UniqueName: \"kubernetes.io/projected/db226856-bdd9-4678-8b04-4358b6c464d1-kube-api-access-b85w4\") pod \"db226856-bdd9-4678-8b04-4358b6c464d1\" (UID: \"db226856-bdd9-4678-8b04-4358b6c464d1\") " Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.622061 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-utilities" (OuterVolumeSpecName: "utilities") pod "db226856-bdd9-4678-8b04-4358b6c464d1" (UID: "db226856-bdd9-4678-8b04-4358b6c464d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.623195 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.637071 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db226856-bdd9-4678-8b04-4358b6c464d1-kube-api-access-b85w4" (OuterVolumeSpecName: "kube-api-access-b85w4") pod "db226856-bdd9-4678-8b04-4358b6c464d1" (UID: "db226856-bdd9-4678-8b04-4358b6c464d1"). InnerVolumeSpecName "kube-api-access-b85w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.692311 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db226856-bdd9-4678-8b04-4358b6c464d1" (UID: "db226856-bdd9-4678-8b04-4358b6c464d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.727060 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b85w4\" (UniqueName: \"kubernetes.io/projected/db226856-bdd9-4678-8b04-4358b6c464d1-kube-api-access-b85w4\") on node \"crc\" DevicePath \"\"" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.727208 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db226856-bdd9-4678-8b04-4358b6c464d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.897967 5024 generic.go:334] "Generic (PLEG): container finished" podID="db226856-bdd9-4678-8b04-4358b6c464d1" containerID="fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4" exitCode=0 Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.898035 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwjrm" event={"ID":"db226856-bdd9-4678-8b04-4358b6c464d1","Type":"ContainerDied","Data":"fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4"} Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.898402 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bwjrm" event={"ID":"db226856-bdd9-4678-8b04-4358b6c464d1","Type":"ContainerDied","Data":"a983fcfee614dbf04d14bd3fe00ce4f904d4e892f1ff2a3f5a010571e82d33a8"} Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.898429 5024 scope.go:117] "RemoveContainer" containerID="fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.898145 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bwjrm" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.923892 5024 scope.go:117] "RemoveContainer" containerID="80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01" Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.949143 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bwjrm"] Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.959636 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bwjrm"] Nov 28 18:28:34 crc kubenswrapper[5024]: I1128 18:28:34.967865 5024 scope.go:117] "RemoveContainer" containerID="053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406" Nov 28 18:28:35 crc kubenswrapper[5024]: I1128 18:28:35.031923 5024 scope.go:117] "RemoveContainer" containerID="fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4" Nov 28 18:28:35 crc kubenswrapper[5024]: E1128 18:28:35.033446 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4\": container with ID starting with fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4 not found: ID does not exist" containerID="fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4" Nov 28 18:28:35 crc kubenswrapper[5024]: I1128 18:28:35.033550 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4"} err="failed to get container status \"fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4\": rpc error: code = NotFound desc = could not find container \"fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4\": container with ID starting with fdf5fdb496ec97b961c54b3f93f3e4c7dbd3d37a5292484e9633b36def8ab4f4 not found: ID does not exist" Nov 28 18:28:35 crc kubenswrapper[5024]: I1128 18:28:35.033637 5024 scope.go:117] "RemoveContainer" containerID="80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01" Nov 28 18:28:35 crc kubenswrapper[5024]: E1128 18:28:35.034044 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01\": container with ID starting with 80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01 not found: ID does not exist" containerID="80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01" Nov 28 18:28:35 crc kubenswrapper[5024]: I1128 18:28:35.034072 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01"} err="failed to get container status \"80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01\": rpc error: code = NotFound desc = could not find container \"80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01\": container with ID starting with 80971fffd42739ba77135b6df25c102d7418eaa813e913f2faa0a5eeb5eb1f01 not found: ID does not exist" Nov 28 18:28:35 crc kubenswrapper[5024]: I1128 18:28:35.034087 5024 scope.go:117] "RemoveContainer" containerID="053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406" Nov 28 18:28:35 crc kubenswrapper[5024]: E1128 18:28:35.034359 5024 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406\": container with ID starting with 053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406 not found: ID does not exist" containerID="053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406" Nov 28 18:28:35 crc kubenswrapper[5024]: I1128 18:28:35.034465 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406"} err="failed to get container status \"053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406\": rpc error: code = NotFound desc = could not find container \"053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406\": container with ID starting with 053f8669bc2d9467bdc58c57d8c9fa719ae72e12ccac25301a689c6729fcd406 not found: ID does not exist" Nov 28 18:28:36 crc kubenswrapper[5024]: I1128 18:28:36.513412 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" path="/var/lib/kubelet/pods/db226856-bdd9-4678-8b04-4358b6c464d1/volumes" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.013488 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sg2g9"] Nov 28 18:28:48 crc kubenswrapper[5024]: E1128 18:28:48.015977 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="extract-utilities" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.016098 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="extract-utilities" Nov 28 18:28:48 crc kubenswrapper[5024]: E1128 18:28:48.016210 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="extract-utilities" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.016297 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="extract-utilities" Nov 28 18:28:48 crc kubenswrapper[5024]: E1128 18:28:48.016401 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="registry-server" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.016505 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="registry-server" Nov 28 18:28:48 crc kubenswrapper[5024]: E1128 18:28:48.016602 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="registry-server" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.016680 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="registry-server" Nov 28 18:28:48 crc kubenswrapper[5024]: E1128 18:28:48.016759 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="extract-content" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.016833 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="extract-content" Nov 28 18:28:48 crc kubenswrapper[5024]: E1128 18:28:48.016941 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" 
containerName="extract-content" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.017015 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="extract-content" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.017403 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="71946fbb-b90c-40e8-bee7-ab31bb718105" containerName="registry-server" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.017513 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="db226856-bdd9-4678-8b04-4358b6c464d1" containerName="registry-server" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.020131 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.052341 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg2g9"] Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.095493 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-catalog-content\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.095571 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-utilities\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.095659 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rxm\" (UniqueName: \"kubernetes.io/projected/2aa8de61-093a-4808-bd8d-ba14430b9a17-kube-api-access-m8rxm\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.198540 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-catalog-content\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.198607 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-utilities\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.198691 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8rxm\" (UniqueName: \"kubernetes.io/projected/2aa8de61-093a-4808-bd8d-ba14430b9a17-kube-api-access-m8rxm\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.199157 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-catalog-content\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.199252 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-utilities\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.224559 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8rxm\" (UniqueName: \"kubernetes.io/projected/2aa8de61-093a-4808-bd8d-ba14430b9a17-kube-api-access-m8rxm\") pod \"redhat-marketplace-sg2g9\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.356818 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:48 crc kubenswrapper[5024]: I1128 18:28:48.885930 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg2g9"] Nov 28 18:28:48 crc kubenswrapper[5024]: W1128 18:28:48.898581 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa8de61_093a_4808_bd8d_ba14430b9a17.slice/crio-2ebb195ab56bb513f167b8952d80794e3b5fdd5d8c36e5471f3a0b69de55f7d3 WatchSource:0}: Error finding container 2ebb195ab56bb513f167b8952d80794e3b5fdd5d8c36e5471f3a0b69de55f7d3: Status 404 returned error can't find the container with id 2ebb195ab56bb513f167b8952d80794e3b5fdd5d8c36e5471f3a0b69de55f7d3 Nov 28 18:28:49 crc kubenswrapper[5024]: I1128 18:28:49.081224 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg2g9" event={"ID":"2aa8de61-093a-4808-bd8d-ba14430b9a17","Type":"ContainerStarted","Data":"2ebb195ab56bb513f167b8952d80794e3b5fdd5d8c36e5471f3a0b69de55f7d3"} Nov 28 18:28:50 crc kubenswrapper[5024]: I1128 18:28:50.095450 5024 generic.go:334] "Generic (PLEG): container finished" podID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerID="b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6" exitCode=0 Nov 28 18:28:50 crc kubenswrapper[5024]: I1128 18:28:50.095538 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg2g9" event={"ID":"2aa8de61-093a-4808-bd8d-ba14430b9a17","Type":"ContainerDied","Data":"b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6"} Nov 28 18:28:52 crc kubenswrapper[5024]: I1128 18:28:52.119755 5024 generic.go:334] "Generic (PLEG): container finished" podID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerID="4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844" exitCode=0 Nov 28 18:28:52 crc kubenswrapper[5024]: I1128 18:28:52.119837 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg2g9" event={"ID":"2aa8de61-093a-4808-bd8d-ba14430b9a17","Type":"ContainerDied","Data":"4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844"} Nov 28 18:28:53 crc kubenswrapper[5024]: I1128 18:28:53.138099 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sg2g9" event={"ID":"2aa8de61-093a-4808-bd8d-ba14430b9a17","Type":"ContainerStarted","Data":"df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11"} Nov 28 18:28:53 crc kubenswrapper[5024]: I1128 18:28:53.170309 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sg2g9" podStartSLOduration=3.55834007 podStartE2EDuration="6.170289429s" podCreationTimestamp="2025-11-28 18:28:47 +0000 UTC" firstStartedPulling="2025-11-28 18:28:50.098230348 +0000 UTC m=+5432.147151263" lastFinishedPulling="2025-11-28 18:28:52.710179717 +0000 UTC m=+5434.759100622" observedRunningTime="2025-11-28 18:28:53.16084458 +0000 UTC m=+5435.209765485" watchObservedRunningTime="2025-11-28 18:28:53.170289429 +0000 UTC m=+5435.219210334" Nov 28 18:28:58 crc kubenswrapper[5024]: I1128 18:28:58.357287 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:58 crc kubenswrapper[5024]: I1128 18:28:58.357938 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:58 crc kubenswrapper[5024]: I1128 18:28:58.826897 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:59 crc kubenswrapper[5024]: I1128 18:28:59.298673 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:28:59 crc kubenswrapper[5024]: I1128 18:28:59.355174 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg2g9"] Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.265455 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sg2g9" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="registry-server" containerID="cri-o://df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11" gracePeriod=2 Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.836240 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.960817 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8rxm\" (UniqueName: \"kubernetes.io/projected/2aa8de61-093a-4808-bd8d-ba14430b9a17-kube-api-access-m8rxm\") pod \"2aa8de61-093a-4808-bd8d-ba14430b9a17\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.960905 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-catalog-content\") pod \"2aa8de61-093a-4808-bd8d-ba14430b9a17\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.960929 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-utilities\") pod \"2aa8de61-093a-4808-bd8d-ba14430b9a17\" (UID: \"2aa8de61-093a-4808-bd8d-ba14430b9a17\") " Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.962262 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-utilities" (OuterVolumeSpecName: "utilities") pod "2aa8de61-093a-4808-bd8d-ba14430b9a17" (UID: "2aa8de61-093a-4808-bd8d-ba14430b9a17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.967193 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa8de61-093a-4808-bd8d-ba14430b9a17-kube-api-access-m8rxm" (OuterVolumeSpecName: "kube-api-access-m8rxm") pod "2aa8de61-093a-4808-bd8d-ba14430b9a17" (UID: "2aa8de61-093a-4808-bd8d-ba14430b9a17"). InnerVolumeSpecName "kube-api-access-m8rxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:29:01 crc kubenswrapper[5024]: I1128 18:29:01.986667 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aa8de61-093a-4808-bd8d-ba14430b9a17" (UID: "2aa8de61-093a-4808-bd8d-ba14430b9a17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.062656 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.062902 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa8de61-093a-4808-bd8d-ba14430b9a17-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.062913 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8rxm\" (UniqueName: \"kubernetes.io/projected/2aa8de61-093a-4808-bd8d-ba14430b9a17-kube-api-access-m8rxm\") on node \"crc\" DevicePath \"\"" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.280142 5024 generic.go:334] "Generic (PLEG): container finished" podID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerID="df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11" exitCode=0 Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.280184 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg2g9" event={"ID":"2aa8de61-093a-4808-bd8d-ba14430b9a17","Type":"ContainerDied","Data":"df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11"} Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.280214 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg2g9" event={"ID":"2aa8de61-093a-4808-bd8d-ba14430b9a17","Type":"ContainerDied","Data":"2ebb195ab56bb513f167b8952d80794e3b5fdd5d8c36e5471f3a0b69de55f7d3"} Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.280230 5024 scope.go:117] "RemoveContainer" containerID="df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.280305 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg2g9" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.355682 5024 scope.go:117] "RemoveContainer" containerID="4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.357839 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg2g9"] Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.382221 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg2g9"] Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.446123 5024 scope.go:117] "RemoveContainer" containerID="b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.496323 5024 scope.go:117] "RemoveContainer" containerID="df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11" Nov 28 18:29:02 crc kubenswrapper[5024]: E1128 18:29:02.499508 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11\": container with ID starting with df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11 not found: ID does not exist" containerID="df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.499548 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11"} err="failed to get container status \"df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11\": rpc error: code = NotFound desc = could not find container \"df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11\": container with ID starting with df25c5a0f4ff0265321139ed997551e4f252c823b0090e805384e9830a89cb11 not found: ID does not exist" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.499578 5024 scope.go:117] "RemoveContainer" containerID="4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844" Nov 28 18:29:02 crc kubenswrapper[5024]: E1128 18:29:02.500328 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844\": container with ID starting with 4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844 not found: ID does not exist" containerID="4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.500365 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844"} err="failed to get container status \"4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844\": rpc error: code = NotFound desc = could not find container \"4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844\": container with ID starting with 4f3c69c7c829bd83504854d66299644b2eefef36a293d6385429cb6766d15844 not found: ID does not exist" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.500391 5024 scope.go:117] "RemoveContainer" containerID="b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6" Nov 28 18:29:02 crc kubenswrapper[5024]: E1128 18:29:02.508534 5024 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6\": container with ID starting with b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6 not found: ID does not exist" containerID="b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.508581 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6"} err="failed to get container status \"b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6\": rpc error: code = NotFound desc = could not find container \"b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6\": container with ID starting with b1ed71d2572c353395f545ede4b64176a9f8784d536f53b50f775419ecded3c6 not found: ID does not exist" Nov 28 18:29:02 crc kubenswrapper[5024]: I1128 18:29:02.535511 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" path="/var/lib/kubelet/pods/2aa8de61-093a-4808-bd8d-ba14430b9a17/volumes" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.165279 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z"] Nov 28 18:30:00 crc kubenswrapper[5024]: E1128 18:30:00.166206 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="registry-server" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.166220 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="registry-server" Nov 28 18:30:00 crc kubenswrapper[5024]: E1128 18:30:00.166245 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="extract-content" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.166251 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="extract-content" Nov 28 18:30:00 crc kubenswrapper[5024]: E1128 18:30:00.166293 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="extract-utilities" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.166299 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="extract-utilities" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.166508 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa8de61-093a-4808-bd8d-ba14430b9a17" containerName="registry-server" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.167453 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.174538 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.179268 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.195822 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z"] Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.272384 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4rjh\" (UniqueName: \"kubernetes.io/projected/10d30b33-4eb2-4d69-a299-fdda8984a670-kube-api-access-n4rjh\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.272821 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10d30b33-4eb2-4d69-a299-fdda8984a670-secret-volume\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.273100 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d30b33-4eb2-4d69-a299-fdda8984a670-config-volume\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.375683 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4rjh\" (UniqueName: \"kubernetes.io/projected/10d30b33-4eb2-4d69-a299-fdda8984a670-kube-api-access-n4rjh\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.375853 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10d30b33-4eb2-4d69-a299-fdda8984a670-secret-volume\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.375959 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d30b33-4eb2-4d69-a299-fdda8984a670-config-volume\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.376883 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d30b33-4eb2-4d69-a299-fdda8984a670-config-volume\") pod 
\"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.383158 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10d30b33-4eb2-4d69-a299-fdda8984a670-secret-volume\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.402829 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4rjh\" (UniqueName: \"kubernetes.io/projected/10d30b33-4eb2-4d69-a299-fdda8984a670-kube-api-access-n4rjh\") pod \"collect-profiles-29405910-c8c9z\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:00 crc kubenswrapper[5024]: I1128 18:30:00.500703 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:01 crc kubenswrapper[5024]: I1128 18:30:01.099515 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z"] Nov 28 18:30:01 crc kubenswrapper[5024]: I1128 18:30:01.996867 5024 generic.go:334] "Generic (PLEG): container finished" podID="10d30b33-4eb2-4d69-a299-fdda8984a670" containerID="c2b7b313fbe05026c069a317800ad4b0a18be45f6d4b57f2774d4ca5539aad98" exitCode=0 Nov 28 18:30:01 crc kubenswrapper[5024]: I1128 18:30:01.996922 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" event={"ID":"10d30b33-4eb2-4d69-a299-fdda8984a670","Type":"ContainerDied","Data":"c2b7b313fbe05026c069a317800ad4b0a18be45f6d4b57f2774d4ca5539aad98"} Nov 28 18:30:01 crc kubenswrapper[5024]: I1128 18:30:01.997462 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" event={"ID":"10d30b33-4eb2-4d69-a299-fdda8984a670","Type":"ContainerStarted","Data":"b0552cc4e9d46a1e729baecf1e9ad15908530b4740e2c61d743d53fa45b78976"} Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.397926 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.557333 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10d30b33-4eb2-4d69-a299-fdda8984a670-secret-volume\") pod \"10d30b33-4eb2-4d69-a299-fdda8984a670\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.557393 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d30b33-4eb2-4d69-a299-fdda8984a670-config-volume\") pod \"10d30b33-4eb2-4d69-a299-fdda8984a670\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.557673 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4rjh\" (UniqueName: \"kubernetes.io/projected/10d30b33-4eb2-4d69-a299-fdda8984a670-kube-api-access-n4rjh\") pod \"10d30b33-4eb2-4d69-a299-fdda8984a670\" (UID: \"10d30b33-4eb2-4d69-a299-fdda8984a670\") " Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.558926 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10d30b33-4eb2-4d69-a299-fdda8984a670-config-volume" (OuterVolumeSpecName: "config-volume") pod "10d30b33-4eb2-4d69-a299-fdda8984a670" (UID: "10d30b33-4eb2-4d69-a299-fdda8984a670"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.563616 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10d30b33-4eb2-4d69-a299-fdda8984a670-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "10d30b33-4eb2-4d69-a299-fdda8984a670" (UID: "10d30b33-4eb2-4d69-a299-fdda8984a670"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.563680 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d30b33-4eb2-4d69-a299-fdda8984a670-kube-api-access-n4rjh" (OuterVolumeSpecName: "kube-api-access-n4rjh") pod "10d30b33-4eb2-4d69-a299-fdda8984a670" (UID: "10d30b33-4eb2-4d69-a299-fdda8984a670"). InnerVolumeSpecName "kube-api-access-n4rjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.660468 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4rjh\" (UniqueName: \"kubernetes.io/projected/10d30b33-4eb2-4d69-a299-fdda8984a670-kube-api-access-n4rjh\") on node \"crc\" DevicePath \"\"" Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.660503 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10d30b33-4eb2-4d69-a299-fdda8984a670-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:30:03 crc kubenswrapper[5024]: I1128 18:30:03.660518 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d30b33-4eb2-4d69-a299-fdda8984a670-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:30:04 crc kubenswrapper[5024]: I1128 18:30:04.039251 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" event={"ID":"10d30b33-4eb2-4d69-a299-fdda8984a670","Type":"ContainerDied","Data":"b0552cc4e9d46a1e729baecf1e9ad15908530b4740e2c61d743d53fa45b78976"} Nov 28 18:30:04 crc kubenswrapper[5024]: I1128 18:30:04.039603 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0552cc4e9d46a1e729baecf1e9ad15908530b4740e2c61d743d53fa45b78976" Nov 28 18:30:04 crc kubenswrapper[5024]: I1128 18:30:04.039789 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405910-c8c9z" Nov 28 18:30:04 crc kubenswrapper[5024]: I1128 18:30:04.481502 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"] Nov 28 18:30:04 crc kubenswrapper[5024]: I1128 18:30:04.491937 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-zxzp8"] Nov 28 18:30:04 crc kubenswrapper[5024]: I1128 18:30:04.510864 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6778bff2-d762-4d52-9833-248e57acab6e" path="/var/lib/kubelet/pods/6778bff2-d762-4d52-9833-248e57acab6e/volumes" Nov 28 18:30:10 crc kubenswrapper[5024]: I1128 18:30:10.575168 5024 scope.go:117] "RemoveContainer" containerID="2ba34b8bea593369d86fd6cb11ee0cfaed9b10c5ecbb5c2a48598033a2bcf63f" Nov 28 18:30:37 crc kubenswrapper[5024]: I1128 18:30:37.565146 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:30:37 crc kubenswrapper[5024]: I1128 18:30:37.565648 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:31:07 crc kubenswrapper[5024]: I1128 18:31:07.565003 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 28 18:31:07 crc kubenswrapper[5024]: I1128 18:31:07.565695 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:31:37 crc kubenswrapper[5024]: I1128 18:31:37.565015 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:31:37 crc kubenswrapper[5024]: I1128 18:31:37.565566 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:31:37 crc kubenswrapper[5024]: I1128 18:31:37.565618 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 18:31:37 crc kubenswrapper[5024]: I1128 18:31:37.566671 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12f4246b0b801d2f2b8b304991ac24b889477c8dfb6a5f2330e902c248321a44"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 18:31:37 crc kubenswrapper[5024]: I1128 18:31:37.566734 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://12f4246b0b801d2f2b8b304991ac24b889477c8dfb6a5f2330e902c248321a44" gracePeriod=600 Nov 28 18:31:38 crc kubenswrapper[5024]: I1128 18:31:38.226371 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="12f4246b0b801d2f2b8b304991ac24b889477c8dfb6a5f2330e902c248321a44" exitCode=0 Nov 28 18:31:38 crc kubenswrapper[5024]: I1128 18:31:38.226448 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"12f4246b0b801d2f2b8b304991ac24b889477c8dfb6a5f2330e902c248321a44"} Nov 28 18:31:38 crc kubenswrapper[5024]: I1128 18:31:38.226901 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30"} Nov 28 18:31:38 crc kubenswrapper[5024]: I1128 18:31:38.226930 5024 scope.go:117] "RemoveContainer" containerID="4fc86e61f32541109397df449fcdcdc9ada50c9b00d0b6045ff3c031a21feab7" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.228732 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wcrk6"] Nov 28 18:32:24 crc kubenswrapper[5024]: E1128 18:32:24.230321 5024 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="10d30b33-4eb2-4d69-a299-fdda8984a670" containerName="collect-profiles" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.230344 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d30b33-4eb2-4d69-a299-fdda8984a670" containerName="collect-profiles" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.230656 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d30b33-4eb2-4d69-a299-fdda8984a670" containerName="collect-profiles" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.232629 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.245827 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcrk6"] Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.333319 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwrj\" (UniqueName: \"kubernetes.io/projected/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-kube-api-access-5jwrj\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.333708 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-utilities\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.333752 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-catalog-content\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.437134 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jwrj\" (UniqueName: \"kubernetes.io/projected/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-kube-api-access-5jwrj\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.437246 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-utilities\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.437296 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-catalog-content\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.437883 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-catalog-content\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.437973 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-utilities\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.466371 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jwrj\" (UniqueName: \"kubernetes.io/projected/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-kube-api-access-5jwrj\") pod \"community-operators-wcrk6\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:24 crc kubenswrapper[5024]: I1128 18:32:24.580286 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:25 crc kubenswrapper[5024]: I1128 18:32:25.065501 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcrk6"] Nov 28 18:32:25 crc kubenswrapper[5024]: I1128 18:32:25.813457 5024 generic.go:334] "Generic (PLEG): container finished" podID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerID="d2b0ad4249cbae5cda13f595da658fd35def6ff876bdabc356fdafdcf8adfd6f" exitCode=0 Nov 28 18:32:25 crc kubenswrapper[5024]: I1128 18:32:25.813518 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcrk6" event={"ID":"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a","Type":"ContainerDied","Data":"d2b0ad4249cbae5cda13f595da658fd35def6ff876bdabc356fdafdcf8adfd6f"} Nov 28 18:32:25 crc kubenswrapper[5024]: I1128 18:32:25.813813 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcrk6" event={"ID":"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a","Type":"ContainerStarted","Data":"40c5d146afe1ece104176670bea99896cc705a0178c77e80de0bde9a505f926e"} Nov 28 18:32:27 crc kubenswrapper[5024]: I1128 18:32:27.851546 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcrk6" event={"ID":"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a","Type":"ContainerStarted","Data":"c11c1ad882dffb0edae1ee846b64c0743df5750feba172ac18b0ff1d25b9fcdf"} Nov 28 18:32:28 crc kubenswrapper[5024]: I1128 18:32:28.863921 5024 generic.go:334] "Generic (PLEG): container finished" podID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerID="c11c1ad882dffb0edae1ee846b64c0743df5750feba172ac18b0ff1d25b9fcdf" exitCode=0 Nov 28 18:32:28 crc kubenswrapper[5024]: I1128 18:32:28.864002 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcrk6" event={"ID":"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a","Type":"ContainerDied","Data":"c11c1ad882dffb0edae1ee846b64c0743df5750feba172ac18b0ff1d25b9fcdf"} Nov 28 18:32:29 crc kubenswrapper[5024]: I1128 18:32:29.886265 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcrk6" event={"ID":"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a","Type":"ContainerStarted","Data":"a73ad0d6250e1c196ea6fd57576504dd28ce6617f24cf3735f723eb650b40543"} Nov 28 18:32:34 crc kubenswrapper[5024]: I1128 18:32:34.580928 
5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:34 crc kubenswrapper[5024]: I1128 18:32:34.581533 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:34 crc kubenswrapper[5024]: I1128 18:32:34.640245 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:34 crc kubenswrapper[5024]: I1128 18:32:34.677057 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wcrk6" podStartSLOduration=7.077606892 podStartE2EDuration="10.676983625s" podCreationTimestamp="2025-11-28 18:32:24 +0000 UTC" firstStartedPulling="2025-11-28 18:32:25.815767171 +0000 UTC m=+5647.864688076" lastFinishedPulling="2025-11-28 18:32:29.415143904 +0000 UTC m=+5651.464064809" observedRunningTime="2025-11-28 18:32:29.912746376 +0000 UTC m=+5651.961667321" watchObservedRunningTime="2025-11-28 18:32:34.676983625 +0000 UTC m=+5656.725904540" Nov 28 18:32:35 crc kubenswrapper[5024]: I1128 18:32:35.021101 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:38 crc kubenswrapper[5024]: I1128 18:32:38.210866 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcrk6"] Nov 28 18:32:38 crc kubenswrapper[5024]: I1128 18:32:38.211553 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wcrk6" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="registry-server" containerID="cri-o://a73ad0d6250e1c196ea6fd57576504dd28ce6617f24cf3735f723eb650b40543" gracePeriod=2 Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.021836 5024 generic.go:334] "Generic (PLEG): container finished" podID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerID="a73ad0d6250e1c196ea6fd57576504dd28ce6617f24cf3735f723eb650b40543" exitCode=0 Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.022145 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcrk6" event={"ID":"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a","Type":"ContainerDied","Data":"a73ad0d6250e1c196ea6fd57576504dd28ce6617f24cf3735f723eb650b40543"} Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.424476 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.552693 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-catalog-content\") pod \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.552910 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-utilities\") pod \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.553283 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jwrj\" (UniqueName: \"kubernetes.io/projected/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-kube-api-access-5jwrj\") pod \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\" (UID: \"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a\") " Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.553874 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-utilities" (OuterVolumeSpecName: "utilities") pod "e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" (UID: "e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.554446 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.559928 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-kube-api-access-5jwrj" (OuterVolumeSpecName: "kube-api-access-5jwrj") pod "e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" (UID: "e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a"). InnerVolumeSpecName "kube-api-access-5jwrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.604216 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" (UID: "e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.656604 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:32:39 crc kubenswrapper[5024]: I1128 18:32:39.656640 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jwrj\" (UniqueName: \"kubernetes.io/projected/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a-kube-api-access-5jwrj\") on node \"crc\" DevicePath \"\"" Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.055265 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcrk6" event={"ID":"e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a","Type":"ContainerDied","Data":"40c5d146afe1ece104176670bea99896cc705a0178c77e80de0bde9a505f926e"} Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.055361 5024 scope.go:117] "RemoveContainer" containerID="a73ad0d6250e1c196ea6fd57576504dd28ce6617f24cf3735f723eb650b40543" Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.055395 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcrk6" Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.107510 5024 scope.go:117] "RemoveContainer" containerID="c11c1ad882dffb0edae1ee846b64c0743df5750feba172ac18b0ff1d25b9fcdf" Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.124587 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcrk6"] Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.138227 5024 scope.go:117] "RemoveContainer" containerID="d2b0ad4249cbae5cda13f595da658fd35def6ff876bdabc356fdafdcf8adfd6f" Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.146253 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wcrk6"] Nov 28 18:32:40 crc kubenswrapper[5024]: I1128 18:32:40.508935 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" path="/var/lib/kubelet/pods/e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a/volumes" Nov 28 18:33:37 crc kubenswrapper[5024]: I1128 18:33:37.565949 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:33:37 crc kubenswrapper[5024]: I1128 18:33:37.566549 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:33:37 crc kubenswrapper[5024]: I1128 18:33:37.784830 5024 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="27bdb46e-71e8-41d7-b796-b10d95025f95" containerName="galera" probeResult="failure" output="command timed out" Nov 28 18:34:07 crc kubenswrapper[5024]: I1128 18:34:07.564924 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:34:07 crc kubenswrapper[5024]: I1128 18:34:07.565575 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:34:37 crc kubenswrapper[5024]: I1128 18:34:37.565258 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:34:37 crc kubenswrapper[5024]: I1128 18:34:37.565728 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:34:37 crc kubenswrapper[5024]: I1128 18:34:37.565771 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 18:34:37 crc kubenswrapper[5024]: I1128 18:34:37.566664 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 18:34:37 crc kubenswrapper[5024]: I1128 18:34:37.566717 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" gracePeriod=600 Nov 28 18:34:37 crc kubenswrapper[5024]: E1128 18:34:37.687354 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:34:38 crc kubenswrapper[5024]: I1128 18:34:38.603097 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" exitCode=0 Nov 28 18:34:38 crc kubenswrapper[5024]: I1128 18:34:38.603482 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30"} Nov 28 18:34:38 crc kubenswrapper[5024]: I1128 18:34:38.603516 5024 scope.go:117] "RemoveContainer" containerID="12f4246b0b801d2f2b8b304991ac24b889477c8dfb6a5f2330e902c248321a44" Nov 28 
18:34:38 crc kubenswrapper[5024]: I1128 18:34:38.604330 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:34:38 crc kubenswrapper[5024]: E1128 18:34:38.604632 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:34:53 crc kubenswrapper[5024]: I1128 18:34:53.498845 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:34:53 crc kubenswrapper[5024]: E1128 18:34:53.499837 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:35:04 crc kubenswrapper[5024]: I1128 18:35:04.499071 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:35:04 crc kubenswrapper[5024]: E1128 18:35:04.499860 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:35:20 crc kubenswrapper[5024]: I1128 18:35:20.499355 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:35:20 crc kubenswrapper[5024]: E1128 18:35:20.500619 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:35:33 crc kubenswrapper[5024]: I1128 18:35:33.499431 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:35:33 crc kubenswrapper[5024]: E1128 18:35:33.500389 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:35:46 crc kubenswrapper[5024]: I1128 18:35:46.499385 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:35:46 crc 
kubenswrapper[5024]: E1128 18:35:46.500175 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:35:58 crc kubenswrapper[5024]: I1128 18:35:58.526611 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:35:58 crc kubenswrapper[5024]: E1128 18:35:58.527786 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:36:10 crc kubenswrapper[5024]: I1128 18:36:10.504797 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:36:10 crc kubenswrapper[5024]: E1128 18:36:10.505621 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:36:23 crc kubenswrapper[5024]: I1128 18:36:23.499169 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:36:23 crc kubenswrapper[5024]: E1128 18:36:23.500332 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:36:37 crc kubenswrapper[5024]: I1128 18:36:37.516416 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:36:37 crc kubenswrapper[5024]: E1128 18:36:37.517262 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:36:46 crc kubenswrapper[5024]: I1128 18:36:46.208519 5024 generic.go:334] "Generic (PLEG): container finished" podID="38ea9d2b-3972-4bda-9cdd-c341334be5d1" containerID="52e1c50b3865b7b1de80ca1ff53eb39c6b5738fc00f50caea2adf9e9ddb3a4f2" exitCode=0 Nov 28 18:36:46 crc kubenswrapper[5024]: I1128 18:36:46.208659 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"38ea9d2b-3972-4bda-9cdd-c341334be5d1","Type":"ContainerDied","Data":"52e1c50b3865b7b1de80ca1ff53eb39c6b5738fc00f50caea2adf9e9ddb3a4f2"} Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.629980 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.635275 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ssh-key\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.635563 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-config-data\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.635701 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ca-certs\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.635901 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6hh\" (UniqueName: \"kubernetes.io/projected/38ea9d2b-3972-4bda-9cdd-c341334be5d1-kube-api-access-cm6hh\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.636152 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-temporary\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.636375 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config-secret\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.636431 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-config-data" (OuterVolumeSpecName: "config-data") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.636511 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.636618 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.636731 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.636762 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-workdir\") pod \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\" (UID: \"38ea9d2b-3972-4bda-9cdd-c341334be5d1\") " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.637584 5024 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.637673 5024 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.642346 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.645462 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ea9d2b-3972-4bda-9cdd-c341334be5d1-kube-api-access-cm6hh" (OuterVolumeSpecName: "kube-api-access-cm6hh") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "kube-api-access-cm6hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.658519 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.695697 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.697283 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.709152 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.740428 5024 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.740467 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6hh\" (UniqueName: \"kubernetes.io/projected/38ea9d2b-3972-4bda-9cdd-c341334be5d1-kube-api-access-cm6hh\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.740481 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.740510 5024 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.740527 5024 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/38ea9d2b-3972-4bda-9cdd-c341334be5d1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.740537 5024 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38ea9d2b-3972-4bda-9cdd-c341334be5d1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.767901 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "38ea9d2b-3972-4bda-9cdd-c341334be5d1" (UID: "38ea9d2b-3972-4bda-9cdd-c341334be5d1"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.768299 5024 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.842957 5024 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:47 crc kubenswrapper[5024]: I1128 18:36:47.842987 5024 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38ea9d2b-3972-4bda-9cdd-c341334be5d1-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 18:36:48 crc kubenswrapper[5024]: I1128 18:36:48.234194 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"38ea9d2b-3972-4bda-9cdd-c341334be5d1","Type":"ContainerDied","Data":"00ada76ea9d32754f0b7e47f40b9b1a634f4741e46be1b578248f223a2a4bab7"} Nov 28 18:36:48 crc kubenswrapper[5024]: I1128 18:36:48.234271 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 18:36:48 crc kubenswrapper[5024]: I1128 18:36:48.234646 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00ada76ea9d32754f0b7e47f40b9b1a634f4741e46be1b578248f223a2a4bab7" Nov 28 18:36:52 crc kubenswrapper[5024]: I1128 18:36:52.499104 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:36:52 crc kubenswrapper[5024]: E1128 18:36:52.499776 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.645193 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 18:36:55 crc kubenswrapper[5024]: E1128 18:36:55.646184 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="extract-utilities" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.646278 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="extract-utilities" Nov 28 18:36:55 crc kubenswrapper[5024]: E1128 18:36:55.646309 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ea9d2b-3972-4bda-9cdd-c341334be5d1" containerName="tempest-tests-tempest-tests-runner" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.646319 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ea9d2b-3972-4bda-9cdd-c341334be5d1" containerName="tempest-tests-tempest-tests-runner" Nov 28 18:36:55 crc kubenswrapper[5024]: E1128 18:36:55.646353 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="extract-content" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.646363 5024 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="extract-content" Nov 28 18:36:55 crc kubenswrapper[5024]: E1128 18:36:55.646374 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="registry-server" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.646381 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="registry-server" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.646678 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ea9d2b-3972-4bda-9cdd-c341334be5d1" containerName="tempest-tests-tempest-tests-runner" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.646718 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73a906d-9d2e-47cd-9f9f-5eb4995d5b6a" containerName="registry-server" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.649545 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.653351 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l7s8n" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.666996 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.733429 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1fff600-22cd-4f7e-bc4c-f666a06c01bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.733598 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx26f\" (UniqueName: \"kubernetes.io/projected/b1fff600-22cd-4f7e-bc4c-f666a06c01bb-kube-api-access-bx26f\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1fff600-22cd-4f7e-bc4c-f666a06c01bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.835355 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1fff600-22cd-4f7e-bc4c-f666a06c01bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.835480 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx26f\" (UniqueName: \"kubernetes.io/projected/b1fff600-22cd-4f7e-bc4c-f666a06c01bb-kube-api-access-bx26f\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1fff600-22cd-4f7e-bc4c-f666a06c01bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.836321 5024 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: 
\"b1fff600-22cd-4f7e-bc4c-f666a06c01bb\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.857707 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx26f\" (UniqueName: \"kubernetes.io/projected/b1fff600-22cd-4f7e-bc4c-f666a06c01bb-kube-api-access-bx26f\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1fff600-22cd-4f7e-bc4c-f666a06c01bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:55 crc kubenswrapper[5024]: I1128 18:36:55.867474 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b1fff600-22cd-4f7e-bc4c-f666a06c01bb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:56 crc kubenswrapper[5024]: I1128 18:36:56.017332 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 18:36:56 crc kubenswrapper[5024]: I1128 18:36:56.478375 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 18:36:56 crc kubenswrapper[5024]: I1128 18:36:56.482471 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 18:36:57 crc kubenswrapper[5024]: I1128 18:36:57.410814 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b1fff600-22cd-4f7e-bc4c-f666a06c01bb","Type":"ContainerStarted","Data":"955cf3018465781652452249f2e7ae74e458bfd28752ed34fe011785d875ad81"} Nov 28 18:36:58 crc kubenswrapper[5024]: I1128 18:36:58.422854 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b1fff600-22cd-4f7e-bc4c-f666a06c01bb","Type":"ContainerStarted","Data":"deecc92450f269f7135fc8fd481facfbe0c003fdb0bc8c78af3e4cc78e7d0ae2"} Nov 28 18:36:58 crc kubenswrapper[5024]: I1128 18:36:58.438131 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.995237175 podStartE2EDuration="3.438104047s" podCreationTimestamp="2025-11-28 18:36:55 +0000 UTC" firstStartedPulling="2025-11-28 18:36:56.481670946 +0000 UTC m=+5918.530591851" lastFinishedPulling="2025-11-28 18:36:57.924537828 +0000 UTC m=+5919.973458723" observedRunningTime="2025-11-28 18:36:58.437472959 +0000 UTC m=+5920.486393864" watchObservedRunningTime="2025-11-28 18:36:58.438104047 +0000 UTC m=+5920.487024952" Nov 28 18:37:05 crc kubenswrapper[5024]: I1128 18:37:05.498758 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:37:05 crc kubenswrapper[5024]: E1128 18:37:05.499541 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" 
podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:37:20 crc kubenswrapper[5024]: I1128 18:37:20.498012 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:37:20 crc kubenswrapper[5024]: E1128 18:37:20.498900 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:37:32 crc kubenswrapper[5024]: I1128 18:37:32.499841 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:37:32 crc kubenswrapper[5024]: E1128 18:37:32.500963 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.750086 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l89ps/must-gather-jf5kt"] Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.753207 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.758630 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l89ps"/"default-dockercfg-82rvf" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.759068 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l89ps"/"openshift-service-ca.crt" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.759170 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l89ps"/"kube-root-ca.crt" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.779518 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l89ps/must-gather-jf5kt"] Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.884548 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrrl6\" (UniqueName: \"kubernetes.io/projected/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-kube-api-access-mrrl6\") pod \"must-gather-jf5kt\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.884660 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-must-gather-output\") pod \"must-gather-jf5kt\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.987002 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-must-gather-output\") pod \"must-gather-jf5kt\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.987563 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrrl6\" (UniqueName: \"kubernetes.io/projected/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-kube-api-access-mrrl6\") pod \"must-gather-jf5kt\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:44 crc kubenswrapper[5024]: I1128 18:37:44.987922 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-must-gather-output\") pod \"must-gather-jf5kt\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:45 crc kubenswrapper[5024]: I1128 18:37:45.005686 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrrl6\" (UniqueName: \"kubernetes.io/projected/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-kube-api-access-mrrl6\") pod \"must-gather-jf5kt\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:45 crc kubenswrapper[5024]: I1128 18:37:45.083567 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:37:45 crc kubenswrapper[5024]: I1128 18:37:45.497832 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:37:45 crc kubenswrapper[5024]: E1128 18:37:45.498515 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:37:45 crc kubenswrapper[5024]: I1128 18:37:45.546599 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l89ps/must-gather-jf5kt"] Nov 28 18:37:45 crc kubenswrapper[5024]: W1128 18:37:45.551928 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod309a17c4_130c_4a0e_aa80_7c6254a0f2a4.slice/crio-e0dfcf6b3561fc8b0e7ecfd57d5f299f3bd12eaa96242418824c60b8e039332d WatchSource:0}: Error finding container e0dfcf6b3561fc8b0e7ecfd57d5f299f3bd12eaa96242418824c60b8e039332d: Status 404 returned error can't find the container with id e0dfcf6b3561fc8b0e7ecfd57d5f299f3bd12eaa96242418824c60b8e039332d Nov 28 18:37:46 crc kubenswrapper[5024]: I1128 18:37:46.114331 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/must-gather-jf5kt" event={"ID":"309a17c4-130c-4a0e-aa80-7c6254a0f2a4","Type":"ContainerStarted","Data":"e0dfcf6b3561fc8b0e7ecfd57d5f299f3bd12eaa96242418824c60b8e039332d"} Nov 28 18:37:51 crc kubenswrapper[5024]: I1128 18:37:51.227648 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/must-gather-jf5kt" 
event={"ID":"309a17c4-130c-4a0e-aa80-7c6254a0f2a4","Type":"ContainerStarted","Data":"a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5"} Nov 28 18:37:51 crc kubenswrapper[5024]: I1128 18:37:51.229382 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/must-gather-jf5kt" event={"ID":"309a17c4-130c-4a0e-aa80-7c6254a0f2a4","Type":"ContainerStarted","Data":"67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f"} Nov 28 18:37:51 crc kubenswrapper[5024]: I1128 18:37:51.256925 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l89ps/must-gather-jf5kt" podStartSLOduration=2.209090252 podStartE2EDuration="7.256907635s" podCreationTimestamp="2025-11-28 18:37:44 +0000 UTC" firstStartedPulling="2025-11-28 18:37:45.554306733 +0000 UTC m=+5967.603227638" lastFinishedPulling="2025-11-28 18:37:50.602124116 +0000 UTC m=+5972.651045021" observedRunningTime="2025-11-28 18:37:51.251916262 +0000 UTC m=+5973.300837167" watchObservedRunningTime="2025-11-28 18:37:51.256907635 +0000 UTC m=+5973.305828530" Nov 28 18:37:55 crc kubenswrapper[5024]: E1128 18:37:55.042780 5024 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.141:42468->38.129.56.141:40169: write tcp 38.129.56.141:42468->38.129.56.141:40169: write: connection reset by peer Nov 28 18:37:55 crc kubenswrapper[5024]: I1128 18:37:55.864927 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l89ps/crc-debug-2q2d5"] Nov 28 18:37:55 crc kubenswrapper[5024]: I1128 18:37:55.867456 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:55 crc kubenswrapper[5024]: I1128 18:37:55.903069 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fce34c-83cd-4b75-8aca-5f267aa01a8c-host\") pod \"crc-debug-2q2d5\" (UID: \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:55 crc kubenswrapper[5024]: I1128 18:37:55.903367 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxttt\" (UniqueName: \"kubernetes.io/projected/16fce34c-83cd-4b75-8aca-5f267aa01a8c-kube-api-access-cxttt\") pod \"crc-debug-2q2d5\" (UID: \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:56 crc kubenswrapper[5024]: I1128 18:37:56.005683 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fce34c-83cd-4b75-8aca-5f267aa01a8c-host\") pod \"crc-debug-2q2d5\" (UID: \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:56 crc kubenswrapper[5024]: I1128 18:37:56.005899 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxttt\" (UniqueName: \"kubernetes.io/projected/16fce34c-83cd-4b75-8aca-5f267aa01a8c-kube-api-access-cxttt\") pod \"crc-debug-2q2d5\" (UID: \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:56 crc kubenswrapper[5024]: I1128 18:37:56.006398 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fce34c-83cd-4b75-8aca-5f267aa01a8c-host\") pod \"crc-debug-2q2d5\" (UID: 
\"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:56 crc kubenswrapper[5024]: I1128 18:37:56.027739 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxttt\" (UniqueName: \"kubernetes.io/projected/16fce34c-83cd-4b75-8aca-5f267aa01a8c-kube-api-access-cxttt\") pod \"crc-debug-2q2d5\" (UID: \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:56 crc kubenswrapper[5024]: I1128 18:37:56.195958 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:37:56 crc kubenswrapper[5024]: I1128 18:37:56.296932 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" event={"ID":"16fce34c-83cd-4b75-8aca-5f267aa01a8c","Type":"ContainerStarted","Data":"4454a2fd27787708da74f65f79554a73de98092c1027b1de2f1cf963b90d957f"} Nov 28 18:38:00 crc kubenswrapper[5024]: I1128 18:38:00.499496 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:38:00 crc kubenswrapper[5024]: E1128 18:38:00.501320 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:38:12 crc kubenswrapper[5024]: I1128 18:38:12.498304 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:38:12 crc kubenswrapper[5024]: E1128 18:38:12.499066 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:38:12 crc kubenswrapper[5024]: I1128 18:38:12.563131 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" event={"ID":"16fce34c-83cd-4b75-8aca-5f267aa01a8c","Type":"ContainerStarted","Data":"07cf9c4eff298992655f4349da12b63066f04153d4b3755438db208a41e0be6d"} Nov 28 18:38:12 crc kubenswrapper[5024]: I1128 18:38:12.578015 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" podStartSLOduration=2.077326509 podStartE2EDuration="17.577993885s" podCreationTimestamp="2025-11-28 18:37:55 +0000 UTC" firstStartedPulling="2025-11-28 18:37:56.238720594 +0000 UTC m=+5978.287641499" lastFinishedPulling="2025-11-28 18:38:11.73938797 +0000 UTC m=+5993.788308875" observedRunningTime="2025-11-28 18:38:12.574894307 +0000 UTC m=+5994.623815212" watchObservedRunningTime="2025-11-28 18:38:12.577993885 +0000 UTC m=+5994.626914800" Nov 28 18:38:23 crc kubenswrapper[5024]: I1128 18:38:23.498332 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:38:23 crc kubenswrapper[5024]: E1128 18:38:23.499067 5024 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:38:38 crc kubenswrapper[5024]: I1128 18:38:38.510565 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:38:38 crc kubenswrapper[5024]: E1128 18:38:38.511649 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.121946 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b6pvk"] Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.125748 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.150734 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6pvk"] Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.202374 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-utilities\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.202483 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg5gm\" (UniqueName: \"kubernetes.io/projected/f2f2f22d-175d-4e45-a63b-82558fb12878-kube-api-access-hg5gm\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.202725 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-catalog-content\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.304606 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-catalog-content\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.304772 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-utilities\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " 
pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.304825 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg5gm\" (UniqueName: \"kubernetes.io/projected/f2f2f22d-175d-4e45-a63b-82558fb12878-kube-api-access-hg5gm\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.305432 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-catalog-content\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.305810 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-utilities\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.332311 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg5gm\" (UniqueName: \"kubernetes.io/projected/f2f2f22d-175d-4e45-a63b-82558fb12878-kube-api-access-hg5gm\") pod \"redhat-operators-b6pvk\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:47 crc kubenswrapper[5024]: I1128 18:38:47.460243 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:48 crc kubenswrapper[5024]: I1128 18:38:48.351617 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6pvk"] Nov 28 18:38:49 crc kubenswrapper[5024]: I1128 18:38:49.156812 5024 generic.go:334] "Generic (PLEG): container finished" podID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerID="2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee" exitCode=0 Nov 28 18:38:49 crc kubenswrapper[5024]: I1128 18:38:49.157158 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6pvk" event={"ID":"f2f2f22d-175d-4e45-a63b-82558fb12878","Type":"ContainerDied","Data":"2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee"} Nov 28 18:38:49 crc kubenswrapper[5024]: I1128 18:38:49.157204 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6pvk" event={"ID":"f2f2f22d-175d-4e45-a63b-82558fb12878","Type":"ContainerStarted","Data":"50cab107873e76b931e0859fa47910d3d82717699725daf3f57900f6ae27a7b7"} Nov 28 18:38:51 crc kubenswrapper[5024]: I1128 18:38:51.206524 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6pvk" event={"ID":"f2f2f22d-175d-4e45-a63b-82558fb12878","Type":"ContainerStarted","Data":"1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223"} Nov 28 18:38:52 crc kubenswrapper[5024]: I1128 18:38:52.500322 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:38:52 crc kubenswrapper[5024]: E1128 18:38:52.500928 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:38:54 crc kubenswrapper[5024]: E1128 18:38:54.446285 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f2f22d_175d_4e45_a63b_82558fb12878.slice/crio-1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223.scope\": RecentStats: unable to find data in memory cache]" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.381493 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dc8n"] Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.385269 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.405820 5024 generic.go:334] "Generic (PLEG): container finished" podID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerID="1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223" exitCode=0 Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.405866 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6pvk" event={"ID":"f2f2f22d-175d-4e45-a63b-82558fb12878","Type":"ContainerDied","Data":"1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223"} Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.436602 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dc8n"] Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.571919 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6c6\" (UniqueName: \"kubernetes.io/projected/1b8438d1-e8c8-4679-bc2b-0b220415de11-kube-api-access-hw6c6\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.572479 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-catalog-content\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.572738 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-utilities\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.674438 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-catalog-content\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.674621 5024 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-utilities\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.674685 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw6c6\" (UniqueName: \"kubernetes.io/projected/1b8438d1-e8c8-4679-bc2b-0b220415de11-kube-api-access-hw6c6\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.674998 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-catalog-content\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.676455 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-utilities\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.694705 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw6c6\" (UniqueName: \"kubernetes.io/projected/1b8438d1-e8c8-4679-bc2b-0b220415de11-kube-api-access-hw6c6\") pod \"certified-operators-9dc8n\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:55 crc kubenswrapper[5024]: I1128 18:38:55.731741 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:38:56 crc kubenswrapper[5024]: I1128 18:38:56.298484 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dc8n"] Nov 28 18:38:56 crc kubenswrapper[5024]: I1128 18:38:56.420512 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dc8n" event={"ID":"1b8438d1-e8c8-4679-bc2b-0b220415de11","Type":"ContainerStarted","Data":"e23b280a839790cf59b5fd602a2bf78d7c552439343958c1392d1d137caf612f"} Nov 28 18:38:57 crc kubenswrapper[5024]: I1128 18:38:57.437044 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6pvk" event={"ID":"f2f2f22d-175d-4e45-a63b-82558fb12878","Type":"ContainerStarted","Data":"3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508"} Nov 28 18:38:57 crc kubenswrapper[5024]: I1128 18:38:57.443359 5024 generic.go:334] "Generic (PLEG): container finished" podID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerID="c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea" exitCode=0 Nov 28 18:38:57 crc kubenswrapper[5024]: I1128 18:38:57.443424 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dc8n" event={"ID":"1b8438d1-e8c8-4679-bc2b-0b220415de11","Type":"ContainerDied","Data":"c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea"} Nov 28 18:38:57 crc kubenswrapper[5024]: I1128 18:38:57.461756 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:57 crc kubenswrapper[5024]: I1128 18:38:57.462087 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:38:57 crc kubenswrapper[5024]: I1128 18:38:57.462248 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b6pvk" podStartSLOduration=2.949403969 podStartE2EDuration="10.462230968s" podCreationTimestamp="2025-11-28 18:38:47 +0000 UTC" firstStartedPulling="2025-11-28 18:38:49.167846152 +0000 UTC m=+6031.216767057" lastFinishedPulling="2025-11-28 18:38:56.680673151 +0000 UTC m=+6038.729594056" observedRunningTime="2025-11-28 18:38:57.461578339 +0000 UTC m=+6039.510499294" watchObservedRunningTime="2025-11-28 18:38:57.462230968 +0000 UTC m=+6039.511151873" Nov 28 18:38:58 crc kubenswrapper[5024]: I1128 18:38:58.458109 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dc8n" event={"ID":"1b8438d1-e8c8-4679-bc2b-0b220415de11","Type":"ContainerStarted","Data":"dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b"} Nov 28 18:38:58 crc kubenswrapper[5024]: I1128 18:38:58.524085 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b6pvk" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="registry-server" probeResult="failure" output=< Nov 28 18:38:58 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:38:58 crc kubenswrapper[5024]: > Nov 28 18:39:00 crc kubenswrapper[5024]: I1128 18:39:00.498380 5024 generic.go:334] "Generic (PLEG): container finished" podID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerID="dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b" exitCode=0 Nov 28 18:39:00 crc kubenswrapper[5024]: I1128 18:39:00.511596 5024 
Nov 28 18:39:00 crc kubenswrapper[5024]: I1128 18:39:00.498380 5024 generic.go:334] "Generic (PLEG): container finished" podID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerID="dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b" exitCode=0
Nov 28 18:39:00 crc kubenswrapper[5024]: I1128 18:39:00.511596 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dc8n" event={"ID":"1b8438d1-e8c8-4679-bc2b-0b220415de11","Type":"ContainerDied","Data":"dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b"}
Nov 28 18:39:01 crc kubenswrapper[5024]: I1128 18:39:01.523534 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dc8n" event={"ID":"1b8438d1-e8c8-4679-bc2b-0b220415de11","Type":"ContainerStarted","Data":"53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036"}
Nov 28 18:39:01 crc kubenswrapper[5024]: I1128 18:39:01.549929 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dc8n" podStartSLOduration=3.061445979 podStartE2EDuration="6.549908567s" podCreationTimestamp="2025-11-28 18:38:55 +0000 UTC" firstStartedPulling="2025-11-28 18:38:57.446412256 +0000 UTC m=+6039.495333161" lastFinishedPulling="2025-11-28 18:39:00.934874844 +0000 UTC m=+6042.983795749" observedRunningTime="2025-11-28 18:39:01.542230238 +0000 UTC m=+6043.591151143" watchObservedRunningTime="2025-11-28 18:39:01.549908567 +0000 UTC m=+6043.598829472"
Nov 28 18:39:04 crc kubenswrapper[5024]: I1128 18:39:04.503509 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30"
Nov 28 18:39:04 crc kubenswrapper[5024]: E1128 18:39:04.504147 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97"
Nov 28 18:39:05 crc kubenswrapper[5024]: I1128 18:39:05.733151 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dc8n"
Nov 28 18:39:05 crc kubenswrapper[5024]: I1128 18:39:05.733460 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dc8n"
Nov 28 18:39:05 crc kubenswrapper[5024]: I1128 18:39:05.809377 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dc8n"
Nov 28 18:39:06 crc kubenswrapper[5024]: I1128 18:39:06.636203 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dc8n"
Nov 28 18:39:06 crc kubenswrapper[5024]: I1128 18:39:06.701566 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dc8n"]
Nov 28 18:39:07 crc kubenswrapper[5024]: I1128 18:39:07.596649 5024 generic.go:334] "Generic (PLEG): container finished" podID="16fce34c-83cd-4b75-8aca-5f267aa01a8c" containerID="07cf9c4eff298992655f4349da12b63066f04153d4b3755438db208a41e0be6d" exitCode=0
Nov 28 18:39:07 crc kubenswrapper[5024]: I1128 18:39:07.597760 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" event={"ID":"16fce34c-83cd-4b75-8aca-5f267aa01a8c","Type":"ContainerDied","Data":"07cf9c4eff298992655f4349da12b63066f04153d4b3755438db208a41e0be6d"}
Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.510036 5024 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-b6pvk" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="registry-server" probeResult="failure" output=< Nov 28 18:39:08 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:39:08 crc kubenswrapper[5024]: > Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.606007 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dc8n" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="registry-server" containerID="cri-o://53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036" gracePeriod=2 Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.873793 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.920999 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxttt\" (UniqueName: \"kubernetes.io/projected/16fce34c-83cd-4b75-8aca-5f267aa01a8c-kube-api-access-cxttt\") pod \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\" (UID: \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.921330 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fce34c-83cd-4b75-8aca-5f267aa01a8c-host\") pod \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\" (UID: \"16fce34c-83cd-4b75-8aca-5f267aa01a8c\") " Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.922192 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fce34c-83cd-4b75-8aca-5f267aa01a8c-host" (OuterVolumeSpecName: "host") pod "16fce34c-83cd-4b75-8aca-5f267aa01a8c" (UID: "16fce34c-83cd-4b75-8aca-5f267aa01a8c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.927287 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16fce34c-83cd-4b75-8aca-5f267aa01a8c-kube-api-access-cxttt" (OuterVolumeSpecName: "kube-api-access-cxttt") pod "16fce34c-83cd-4b75-8aca-5f267aa01a8c" (UID: "16fce34c-83cd-4b75-8aca-5f267aa01a8c"). InnerVolumeSpecName "kube-api-access-cxttt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:39:08 crc kubenswrapper[5024]: I1128 18:39:08.980121 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l89ps/crc-debug-2q2d5"] Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.020384 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l89ps/crc-debug-2q2d5"] Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.033395 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxttt\" (UniqueName: \"kubernetes.io/projected/16fce34c-83cd-4b75-8aca-5f267aa01a8c-kube-api-access-cxttt\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.033427 5024 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fce34c-83cd-4b75-8aca-5f267aa01a8c-host\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.111917 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.134484 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-utilities\") pod \"1b8438d1-e8c8-4679-bc2b-0b220415de11\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.134654 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-catalog-content\") pod \"1b8438d1-e8c8-4679-bc2b-0b220415de11\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.134879 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw6c6\" (UniqueName: \"kubernetes.io/projected/1b8438d1-e8c8-4679-bc2b-0b220415de11-kube-api-access-hw6c6\") pod \"1b8438d1-e8c8-4679-bc2b-0b220415de11\" (UID: \"1b8438d1-e8c8-4679-bc2b-0b220415de11\") " Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.135930 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-utilities" (OuterVolumeSpecName: "utilities") pod "1b8438d1-e8c8-4679-bc2b-0b220415de11" (UID: "1b8438d1-e8c8-4679-bc2b-0b220415de11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.139501 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8438d1-e8c8-4679-bc2b-0b220415de11-kube-api-access-hw6c6" (OuterVolumeSpecName: "kube-api-access-hw6c6") pod "1b8438d1-e8c8-4679-bc2b-0b220415de11" (UID: "1b8438d1-e8c8-4679-bc2b-0b220415de11"). InnerVolumeSpecName "kube-api-access-hw6c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.216267 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b8438d1-e8c8-4679-bc2b-0b220415de11" (UID: "1b8438d1-e8c8-4679-bc2b-0b220415de11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.237489 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.237525 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw6c6\" (UniqueName: \"kubernetes.io/projected/1b8438d1-e8c8-4679-bc2b-0b220415de11-kube-api-access-hw6c6\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.237536 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b8438d1-e8c8-4679-bc2b-0b220415de11-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.638462 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4454a2fd27787708da74f65f79554a73de98092c1027b1de2f1cf963b90d957f" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.638483 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-2q2d5" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.646807 5024 generic.go:334] "Generic (PLEG): container finished" podID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerID="53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036" exitCode=0 Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.646856 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dc8n" event={"ID":"1b8438d1-e8c8-4679-bc2b-0b220415de11","Type":"ContainerDied","Data":"53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036"} Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.646886 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dc8n" event={"ID":"1b8438d1-e8c8-4679-bc2b-0b220415de11","Type":"ContainerDied","Data":"e23b280a839790cf59b5fd602a2bf78d7c552439343958c1392d1d137caf612f"} Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.646906 5024 scope.go:117] "RemoveContainer" containerID="53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.647091 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dc8n" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.718317 5024 scope.go:117] "RemoveContainer" containerID="dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.738178 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dc8n"] Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.751737 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dc8n"] Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.763957 5024 scope.go:117] "RemoveContainer" containerID="c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.803850 5024 scope.go:117] "RemoveContainer" containerID="53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036" Nov 28 18:39:09 crc kubenswrapper[5024]: E1128 18:39:09.804792 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036\": container with ID starting with 53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036 not found: ID does not exist" containerID="53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.804864 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036"} err="failed to get container status \"53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036\": rpc error: code = NotFound desc = could not find container \"53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036\": container with ID starting with 53d2093845728e6fbdc3941cb0d249027c6f09e73d4a2f4a47f1df67c1033036 not found: ID does not exist" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.804905 5024 scope.go:117] "RemoveContainer" containerID="dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b" Nov 28 18:39:09 crc kubenswrapper[5024]: E1128 18:39:09.805379 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b\": container with ID starting with dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b not found: ID does not exist" containerID="dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.805415 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b"} err="failed to get container status \"dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b\": rpc error: code = NotFound desc = could not find container \"dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b\": container with ID starting with dc43d881459d63d4b2825da384487472a3ed955652cfd40bf30d116b2913b36b not found: ID does not exist" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.805441 5024 scope.go:117] "RemoveContainer" containerID="c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea" Nov 28 18:39:09 crc kubenswrapper[5024]: E1128 18:39:09.805858 5024 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea\": container with ID starting with c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea not found: ID does not exist" containerID="c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea" Nov 28 18:39:09 crc kubenswrapper[5024]: I1128 18:39:09.805948 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea"} err="failed to get container status \"c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea\": rpc error: code = NotFound desc = could not find container \"c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea\": container with ID starting with c914974ff34c578863776fc299ce9276b63485ebf066b45120d86a5a3089faea not found: ID does not exist" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.170255 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l89ps/crc-debug-pm8s8"] Nov 28 18:39:10 crc kubenswrapper[5024]: E1128 18:39:10.171007 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fce34c-83cd-4b75-8aca-5f267aa01a8c" containerName="container-00" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.174004 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fce34c-83cd-4b75-8aca-5f267aa01a8c" containerName="container-00" Nov 28 18:39:10 crc kubenswrapper[5024]: E1128 18:39:10.174035 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="registry-server" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.174043 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="registry-server" Nov 28 18:39:10 crc kubenswrapper[5024]: E1128 18:39:10.174058 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="extract-utilities" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.174071 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="extract-utilities" Nov 28 18:39:10 crc kubenswrapper[5024]: E1128 18:39:10.174096 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="extract-content" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.174101 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="extract-content" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.174385 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" containerName="registry-server" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.174408 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fce34c-83cd-4b75-8aca-5f267aa01a8c" containerName="container-00" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.175238 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.262464 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwlzg\" (UniqueName: \"kubernetes.io/projected/c7433735-73a0-48b0-9650-1f5f2e696ed6-kube-api-access-dwlzg\") pod \"crc-debug-pm8s8\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.262827 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7433735-73a0-48b0-9650-1f5f2e696ed6-host\") pod \"crc-debug-pm8s8\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.365939 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwlzg\" (UniqueName: \"kubernetes.io/projected/c7433735-73a0-48b0-9650-1f5f2e696ed6-kube-api-access-dwlzg\") pod \"crc-debug-pm8s8\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.366061 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7433735-73a0-48b0-9650-1f5f2e696ed6-host\") pod \"crc-debug-pm8s8\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.366276 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7433735-73a0-48b0-9650-1f5f2e696ed6-host\") pod \"crc-debug-pm8s8\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.385816 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwlzg\" (UniqueName: \"kubernetes.io/projected/c7433735-73a0-48b0-9650-1f5f2e696ed6-kube-api-access-dwlzg\") pod \"crc-debug-pm8s8\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.492423 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.536751 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fce34c-83cd-4b75-8aca-5f267aa01a8c" path="/var/lib/kubelet/pods/16fce34c-83cd-4b75-8aca-5f267aa01a8c/volumes" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.537462 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8438d1-e8c8-4679-bc2b-0b220415de11" path="/var/lib/kubelet/pods/1b8438d1-e8c8-4679-bc2b-0b220415de11/volumes" Nov 28 18:39:10 crc kubenswrapper[5024]: I1128 18:39:10.750288 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-pm8s8" event={"ID":"c7433735-73a0-48b0-9650-1f5f2e696ed6","Type":"ContainerStarted","Data":"d542e014ee63db185d8103b6b4ebea0e1a238ef0acca6ff5e3b666c915bc3839"} Nov 28 18:39:11 crc kubenswrapper[5024]: I1128 18:39:11.764706 5024 generic.go:334] "Generic (PLEG): container finished" podID="c7433735-73a0-48b0-9650-1f5f2e696ed6" containerID="4100caf407d9a6904d6b4a273a778305405a8b19c7106d4b10f0895b18c742f2" exitCode=0 Nov 28 18:39:11 crc kubenswrapper[5024]: I1128 18:39:11.764801 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-pm8s8" event={"ID":"c7433735-73a0-48b0-9650-1f5f2e696ed6","Type":"ContainerDied","Data":"4100caf407d9a6904d6b4a273a778305405a8b19c7106d4b10f0895b18c742f2"} Nov 28 18:39:12 crc kubenswrapper[5024]: I1128 18:39:12.956991 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.137592 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwlzg\" (UniqueName: \"kubernetes.io/projected/c7433735-73a0-48b0-9650-1f5f2e696ed6-kube-api-access-dwlzg\") pod \"c7433735-73a0-48b0-9650-1f5f2e696ed6\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.138235 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7433735-73a0-48b0-9650-1f5f2e696ed6-host\") pod \"c7433735-73a0-48b0-9650-1f5f2e696ed6\" (UID: \"c7433735-73a0-48b0-9650-1f5f2e696ed6\") " Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.139544 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7433735-73a0-48b0-9650-1f5f2e696ed6-host" (OuterVolumeSpecName: "host") pod "c7433735-73a0-48b0-9650-1f5f2e696ed6" (UID: "c7433735-73a0-48b0-9650-1f5f2e696ed6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.149446 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7433735-73a0-48b0-9650-1f5f2e696ed6-kube-api-access-dwlzg" (OuterVolumeSpecName: "kube-api-access-dwlzg") pod "c7433735-73a0-48b0-9650-1f5f2e696ed6" (UID: "c7433735-73a0-48b0-9650-1f5f2e696ed6"). InnerVolumeSpecName "kube-api-access-dwlzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.241277 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwlzg\" (UniqueName: \"kubernetes.io/projected/c7433735-73a0-48b0-9650-1f5f2e696ed6-kube-api-access-dwlzg\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.241305 5024 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7433735-73a0-48b0-9650-1f5f2e696ed6-host\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.808872 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-pm8s8" event={"ID":"c7433735-73a0-48b0-9650-1f5f2e696ed6","Type":"ContainerDied","Data":"d542e014ee63db185d8103b6b4ebea0e1a238ef0acca6ff5e3b666c915bc3839"} Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.809682 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d542e014ee63db185d8103b6b4ebea0e1a238ef0acca6ff5e3b666c915bc3839" Nov 28 18:39:13 crc kubenswrapper[5024]: I1128 18:39:13.809806 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-pm8s8" Nov 28 18:39:14 crc kubenswrapper[5024]: I1128 18:39:14.111952 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l89ps/crc-debug-pm8s8"] Nov 28 18:39:14 crc kubenswrapper[5024]: I1128 18:39:14.123233 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l89ps/crc-debug-pm8s8"] Nov 28 18:39:14 crc kubenswrapper[5024]: I1128 18:39:14.517171 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7433735-73a0-48b0-9650-1f5f2e696ed6" path="/var/lib/kubelet/pods/c7433735-73a0-48b0-9650-1f5f2e696ed6/volumes" Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.514636 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l89ps/crc-debug-rf9d4"] Nov 28 18:39:15 crc kubenswrapper[5024]: E1128 18:39:15.515378 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7433735-73a0-48b0-9650-1f5f2e696ed6" containerName="container-00" Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.515392 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7433735-73a0-48b0-9650-1f5f2e696ed6" containerName="container-00" Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.515639 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7433735-73a0-48b0-9650-1f5f2e696ed6" containerName="container-00" Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.516491 5024 util.go:30] "No sandbox for pod can be found. 
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.514636 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l89ps/crc-debug-rf9d4"]
Nov 28 18:39:15 crc kubenswrapper[5024]: E1128 18:39:15.515378 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7433735-73a0-48b0-9650-1f5f2e696ed6" containerName="container-00"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.515392 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7433735-73a0-48b0-9650-1f5f2e696ed6" containerName="container-00"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.515639 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7433735-73a0-48b0-9650-1f5f2e696ed6" containerName="container-00"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.516491 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-rf9d4"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.595044 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-host\") pod \"crc-debug-rf9d4\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " pod="openshift-must-gather-l89ps/crc-debug-rf9d4"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.595174 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcpnh\" (UniqueName: \"kubernetes.io/projected/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-kube-api-access-dcpnh\") pod \"crc-debug-rf9d4\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " pod="openshift-must-gather-l89ps/crc-debug-rf9d4"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.697233 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcpnh\" (UniqueName: \"kubernetes.io/projected/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-kube-api-access-dcpnh\") pod \"crc-debug-rf9d4\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " pod="openshift-must-gather-l89ps/crc-debug-rf9d4"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.697712 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-host\") pod \"crc-debug-rf9d4\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " pod="openshift-must-gather-l89ps/crc-debug-rf9d4"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.697937 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-host\") pod \"crc-debug-rf9d4\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " pod="openshift-must-gather-l89ps/crc-debug-rf9d4"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.716438 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcpnh\" (UniqueName: \"kubernetes.io/projected/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-kube-api-access-dcpnh\") pod \"crc-debug-rf9d4\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " pod="openshift-must-gather-l89ps/crc-debug-rf9d4"
Nov 28 18:39:15 crc kubenswrapper[5024]: I1128 18:39:15.836680 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-rf9d4" Nov 28 18:39:15 crc kubenswrapper[5024]: W1128 18:39:15.869149 5024 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c567f4_82e2_43d6_b357_ba5ac7f30cbe.slice/crio-7bf52204a9c7b82d4ab0f38de0d99960eb91fde4f8fd83b4470db2bdf737e143 WatchSource:0}: Error finding container 7bf52204a9c7b82d4ab0f38de0d99960eb91fde4f8fd83b4470db2bdf737e143: Status 404 returned error can't find the container with id 7bf52204a9c7b82d4ab0f38de0d99960eb91fde4f8fd83b4470db2bdf737e143 Nov 28 18:39:16 crc kubenswrapper[5024]: I1128 18:39:16.841802 5024 generic.go:334] "Generic (PLEG): container finished" podID="21c567f4-82e2-43d6-b357-ba5ac7f30cbe" containerID="bb8165a40a9227157bab75017640e6e8e344eda3109a1d6f6cbc7f4fdb094628" exitCode=0 Nov 28 18:39:16 crc kubenswrapper[5024]: I1128 18:39:16.841884 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-rf9d4" event={"ID":"21c567f4-82e2-43d6-b357-ba5ac7f30cbe","Type":"ContainerDied","Data":"bb8165a40a9227157bab75017640e6e8e344eda3109a1d6f6cbc7f4fdb094628"} Nov 28 18:39:16 crc kubenswrapper[5024]: I1128 18:39:16.843328 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/crc-debug-rf9d4" event={"ID":"21c567f4-82e2-43d6-b357-ba5ac7f30cbe","Type":"ContainerStarted","Data":"7bf52204a9c7b82d4ab0f38de0d99960eb91fde4f8fd83b4470db2bdf737e143"} Nov 28 18:39:16 crc kubenswrapper[5024]: I1128 18:39:16.924677 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l89ps/crc-debug-rf9d4"] Nov 28 18:39:16 crc kubenswrapper[5024]: I1128 18:39:16.936237 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l89ps/crc-debug-rf9d4"] Nov 28 18:39:17 crc kubenswrapper[5024]: I1128 18:39:17.498571 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:39:17 crc kubenswrapper[5024]: E1128 18:39:17.498909 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:39:17 crc kubenswrapper[5024]: I1128 18:39:17.528522 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:39:17 crc kubenswrapper[5024]: I1128 18:39:17.583842 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.009827 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-rf9d4" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.163404 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcpnh\" (UniqueName: \"kubernetes.io/projected/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-kube-api-access-dcpnh\") pod \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.164827 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-host\") pod \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\" (UID: \"21c567f4-82e2-43d6-b357-ba5ac7f30cbe\") " Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.165484 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-host" (OuterVolumeSpecName: "host") pod "21c567f4-82e2-43d6-b357-ba5ac7f30cbe" (UID: "21c567f4-82e2-43d6-b357-ba5ac7f30cbe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.166122 5024 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-host\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.170365 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-kube-api-access-dcpnh" (OuterVolumeSpecName: "kube-api-access-dcpnh") pod "21c567f4-82e2-43d6-b357-ba5ac7f30cbe" (UID: "21c567f4-82e2-43d6-b357-ba5ac7f30cbe"). InnerVolumeSpecName "kube-api-access-dcpnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.260445 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6pvk"] Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.267886 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcpnh\" (UniqueName: \"kubernetes.io/projected/21c567f4-82e2-43d6-b357-ba5ac7f30cbe-kube-api-access-dcpnh\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.516695 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c567f4-82e2-43d6-b357-ba5ac7f30cbe" path="/var/lib/kubelet/pods/21c567f4-82e2-43d6-b357-ba5ac7f30cbe/volumes" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.863812 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b6pvk" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="registry-server" containerID="cri-o://3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508" gracePeriod=2 Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.864499 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/crc-debug-rf9d4" Nov 28 18:39:18 crc kubenswrapper[5024]: I1128 18:39:18.865465 5024 scope.go:117] "RemoveContainer" containerID="bb8165a40a9227157bab75017640e6e8e344eda3109a1d6f6cbc7f4fdb094628" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.513830 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.597947 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-utilities\") pod \"f2f2f22d-175d-4e45-a63b-82558fb12878\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.598492 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-catalog-content\") pod \"f2f2f22d-175d-4e45-a63b-82558fb12878\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.598545 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg5gm\" (UniqueName: \"kubernetes.io/projected/f2f2f22d-175d-4e45-a63b-82558fb12878-kube-api-access-hg5gm\") pod \"f2f2f22d-175d-4e45-a63b-82558fb12878\" (UID: \"f2f2f22d-175d-4e45-a63b-82558fb12878\") " Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.598967 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-utilities" (OuterVolumeSpecName: "utilities") pod "f2f2f22d-175d-4e45-a63b-82558fb12878" (UID: "f2f2f22d-175d-4e45-a63b-82558fb12878"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.599944 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.607369 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f2f22d-175d-4e45-a63b-82558fb12878-kube-api-access-hg5gm" (OuterVolumeSpecName: "kube-api-access-hg5gm") pod "f2f2f22d-175d-4e45-a63b-82558fb12878" (UID: "f2f2f22d-175d-4e45-a63b-82558fb12878"). InnerVolumeSpecName "kube-api-access-hg5gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.702199 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg5gm\" (UniqueName: \"kubernetes.io/projected/f2f2f22d-175d-4e45-a63b-82558fb12878-kube-api-access-hg5gm\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.778821 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2f2f22d-175d-4e45-a63b-82558fb12878" (UID: "f2f2f22d-175d-4e45-a63b-82558fb12878"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.804523 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f2f22d-175d-4e45-a63b-82558fb12878-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.878624 5024 generic.go:334] "Generic (PLEG): container finished" podID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerID="3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508" exitCode=0 Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.878710 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6pvk" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.878667 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6pvk" event={"ID":"f2f2f22d-175d-4e45-a63b-82558fb12878","Type":"ContainerDied","Data":"3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508"} Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.879204 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6pvk" event={"ID":"f2f2f22d-175d-4e45-a63b-82558fb12878","Type":"ContainerDied","Data":"50cab107873e76b931e0859fa47910d3d82717699725daf3f57900f6ae27a7b7"} Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.879261 5024 scope.go:117] "RemoveContainer" containerID="3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.904133 5024 scope.go:117] "RemoveContainer" containerID="1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.920638 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6pvk"] Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.932475 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b6pvk"] Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.945765 5024 scope.go:117] "RemoveContainer" containerID="2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.997947 5024 scope.go:117] "RemoveContainer" containerID="3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508" Nov 28 18:39:19 crc kubenswrapper[5024]: E1128 18:39:19.998572 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508\": container with ID starting with 3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508 not found: ID does not exist" containerID="3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.998634 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508"} err="failed to get container status \"3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508\": rpc error: code = NotFound desc = could not find container \"3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508\": container with ID starting with 3ec872d9700d9a2d001ca5e5f7caee950fd2690203b80629e9a3771a0122d508 not found: ID does not exist" Nov 28 18:39:19 crc 
kubenswrapper[5024]: I1128 18:39:19.998669 5024 scope.go:117] "RemoveContainer" containerID="1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223" Nov 28 18:39:19 crc kubenswrapper[5024]: E1128 18:39:19.999450 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223\": container with ID starting with 1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223 not found: ID does not exist" containerID="1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.999488 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223"} err="failed to get container status \"1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223\": rpc error: code = NotFound desc = could not find container \"1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223\": container with ID starting with 1d4c0aa37341b7bf29efd27ddff6a717cb3bab2ed3dad41aa135a3fb28a0f223 not found: ID does not exist" Nov 28 18:39:19 crc kubenswrapper[5024]: I1128 18:39:19.999516 5024 scope.go:117] "RemoveContainer" containerID="2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee" Nov 28 18:39:20 crc kubenswrapper[5024]: E1128 18:39:19.999992 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee\": container with ID starting with 2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee not found: ID does not exist" containerID="2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee" Nov 28 18:39:20 crc kubenswrapper[5024]: I1128 18:39:20.000050 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee"} err="failed to get container status \"2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee\": rpc error: code = NotFound desc = could not find container \"2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee\": container with ID starting with 2b602bd10b21516691674c969563d6c0f881132aeb6d9a850784c740dc965cee not found: ID does not exist" Nov 28 18:39:20 crc kubenswrapper[5024]: I1128 18:39:20.511650 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" path="/var/lib/kubelet/pods/f2f2f22d-175d-4e45-a63b-82558fb12878/volumes" Nov 28 18:39:28 crc kubenswrapper[5024]: I1128 18:39:28.507463 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:39:28 crc kubenswrapper[5024]: E1128 18:39:28.508475 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:39:41 crc kubenswrapper[5024]: I1128 18:39:41.498745 5024 scope.go:117] "RemoveContainer" containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" 
Nov 28 18:39:42 crc kubenswrapper[5024]: I1128 18:39:42.143226 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"341f4ee9b8cbef62b36434f9d731f94ba6dabce1bdd9d060f4ec6256f9507c7c"} Nov 28 18:39:44 crc kubenswrapper[5024]: I1128 18:39:44.534866 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_3363602b-4e31-4813-b443-e8bc9468059c/aodh-api/0.log" Nov 28 18:39:44 crc kubenswrapper[5024]: I1128 18:39:44.658317 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_3363602b-4e31-4813-b443-e8bc9468059c/aodh-evaluator/0.log" Nov 28 18:39:44 crc kubenswrapper[5024]: I1128 18:39:44.767965 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_3363602b-4e31-4813-b443-e8bc9468059c/aodh-notifier/0.log" Nov 28 18:39:44 crc kubenswrapper[5024]: I1128 18:39:44.785690 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_3363602b-4e31-4813-b443-e8bc9468059c/aodh-listener/0.log" Nov 28 18:39:44 crc kubenswrapper[5024]: I1128 18:39:44.951249 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d8644854-9v4h9_fe4994f3-49cb-4dda-957d-8deb244949e7/barbican-api/0.log" Nov 28 18:39:44 crc kubenswrapper[5024]: I1128 18:39:44.983833 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d8644854-9v4h9_fe4994f3-49cb-4dda-957d-8deb244949e7/barbican-api-log/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.072848 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bd9bb486-bbh5j_bd2c11b3-5ebf-4225-9082-40859af5a480/barbican-keystone-listener/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.263388 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6bd9bb486-bbh5j_bd2c11b3-5ebf-4225-9082-40859af5a480/barbican-keystone-listener-log/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.287678 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d88cbb66c-lp6ws_a957805d-e8d1-45ac-890f-23ae1e98516a/barbican-worker-log/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.315432 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d88cbb66c-lp6ws_a957805d-e8d1-45ac-890f-23ae1e98516a/barbican-worker/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.517756 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jlhwc_c2e066c9-5f85-4782-9317-546bcc3457e8/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.611459 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_446db982-05e3-4131-aaf7-07e42b726565/ceilometer-central-agent/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.748579 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_446db982-05e3-4131-aaf7-07e42b726565/proxy-httpd/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.824091 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_446db982-05e3-4131-aaf7-07e42b726565/ceilometer-notification-agent/0.log" Nov 28 18:39:45 crc kubenswrapper[5024]: I1128 18:39:45.830622 
5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_446db982-05e3-4131-aaf7-07e42b726565/sg-core/0.log" Nov 28 18:39:46 crc kubenswrapper[5024]: I1128 18:39:46.024597 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f9ace56d-5740-45f2-b8ac-04c2ed9b4270/cinder-api-log/0.log" Nov 28 18:39:46 crc kubenswrapper[5024]: I1128 18:39:46.052139 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f9ace56d-5740-45f2-b8ac-04c2ed9b4270/cinder-api/0.log" Nov 28 18:39:46 crc kubenswrapper[5024]: I1128 18:39:46.249336 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d67a24d-c44f-46a8-b24c-ac9ddb765f0f/cinder-scheduler/0.log" Nov 28 18:39:46 crc kubenswrapper[5024]: I1128 18:39:46.515108 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d67a24d-c44f-46a8-b24c-ac9ddb765f0f/probe/0.log" Nov 28 18:39:46 crc kubenswrapper[5024]: I1128 18:39:46.583372 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pxt2b_f92e6a57-6a9f-4020-86d0-298a7bf3ad71/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:46 crc kubenswrapper[5024]: I1128 18:39:46.786924 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-gbngz_b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1/init/0.log" Nov 28 18:39:46 crc kubenswrapper[5024]: I1128 18:39:46.795130 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6fxn7_acf50993-28ae-470e-a987-d19f7f609d59/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:47 crc kubenswrapper[5024]: I1128 18:39:47.034003 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-gbngz_b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1/init/0.log" Nov 28 18:39:47 crc kubenswrapper[5024]: I1128 18:39:47.088526 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-q9m9v_08a39720-1020-466e-9226-0257994b642f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:47 crc kubenswrapper[5024]: I1128 18:39:47.090712 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-gbngz_b9a5c65a-9917-497c-9a75-ce5ccf0a6ed1/dnsmasq-dns/0.log" Nov 28 18:39:47 crc kubenswrapper[5024]: I1128 18:39:47.343379 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f72463ea-b813-4303-bd6a-78c55da993de/glance-log/0.log" Nov 28 18:39:47 crc kubenswrapper[5024]: I1128 18:39:47.348991 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f72463ea-b813-4303-bd6a-78c55da993de/glance-httpd/0.log" Nov 28 18:39:47 crc kubenswrapper[5024]: I1128 18:39:47.571230 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1642d7d9-4b46-4214-9d51-c3f2681b3f35/glance-httpd/0.log" Nov 28 18:39:47 crc kubenswrapper[5024]: I1128 18:39:47.603504 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1642d7d9-4b46-4214-9d51-c3f2681b3f35/glance-log/0.log" Nov 28 18:39:48 crc kubenswrapper[5024]: I1128 18:39:48.140317 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-engine-5c7c65bb6d-4vg66_8ddba23c-0074-409a-b5c1-fd147c402317/heat-engine/0.log" Nov 28 18:39:48 crc kubenswrapper[5024]: I1128 18:39:48.402536 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-6hqx2_9bb41f70-f26c-4ca8-8953-0dad03b77a6a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:48 crc kubenswrapper[5024]: I1128 18:39:48.562831 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2mxf4_08090ae1-dcb4-4744-8650-c56fcdb30575/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:48 crc kubenswrapper[5024]: I1128 18:39:48.596268 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7c7f65cbb-fvsgt_4f741e4f-1722-4cea-9fdf-2f93fd734983/heat-api/0.log" Nov 28 18:39:48 crc kubenswrapper[5024]: I1128 18:39:48.621887 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-bc8bb8756-2wm58_39ee04ed-749f-4912-ae06-7feea922da25/heat-cfnapi/0.log" Nov 28 18:39:48 crc kubenswrapper[5024]: I1128 18:39:48.847878 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405881-9766r_42a7d1a5-e99a-47e1-aeb7-20974f1a50a1/keystone-cron/0.log" Nov 28 18:39:48 crc kubenswrapper[5024]: I1128 18:39:48.912263 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c38186f7-7448-4cdd-8f18-0336385c33ad/kube-state-metrics/0.log" Nov 28 18:39:49 crc kubenswrapper[5024]: I1128 18:39:49.172344 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2vlhx_0c74575c-09fd-4190-9781-0e1e98d85d85/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:49 crc kubenswrapper[5024]: I1128 18:39:49.204622 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54f6ccfc5c-rvfhm_8f65338e-2617-4a88-91ff-3f13acb313bc/keystone-api/0.log" Nov 28 18:39:49 crc kubenswrapper[5024]: I1128 18:39:49.297366 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-7jm78_a6d387e7-2e04-456a-973b-d3d13b988d4b/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:49 crc kubenswrapper[5024]: I1128 18:39:49.557504 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_141d5e1c-7eb9-40c1-9855-c048660125f6/mysqld-exporter/0.log" Nov 28 18:39:49 crc kubenswrapper[5024]: I1128 18:39:49.850991 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-x5g4d_a052b839-2b8d-4f97-afc6-29279c78dbdc/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:49 crc kubenswrapper[5024]: I1128 18:39:49.916794 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7978574989-5r9v4_dce14449-21ac-4abd-9e71-13fa2a0c471b/neutron-httpd/0.log" Nov 28 18:39:49 crc kubenswrapper[5024]: I1128 18:39:49.991383 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7978574989-5r9v4_dce14449-21ac-4abd-9e71-13fa2a0c471b/neutron-api/0.log" Nov 28 18:39:50 crc kubenswrapper[5024]: I1128 18:39:50.510907 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_06a9aedd-7e31-4c76-8ca8-65ede667175e/nova-cell0-conductor-conductor/0.log" Nov 28 18:39:50 crc 
kubenswrapper[5024]: I1128 18:39:50.941493 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9de4afa0-2f07-41a3-bf8c-a3b3cd056922/nova-api-log/0.log" Nov 28 18:39:51 crc kubenswrapper[5024]: I1128 18:39:51.056329 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bfa1e2f1-4dce-436d-acb7-9dbb9cc4b22f/nova-cell1-conductor-conductor/0.log" Nov 28 18:39:51 crc kubenswrapper[5024]: I1128 18:39:51.263321 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_512b384d-2288-4ff5-9f13-bc6df840194f/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 18:39:51 crc kubenswrapper[5024]: I1128 18:39:51.264603 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9de4afa0-2f07-41a3-bf8c-a3b3cd056922/nova-api-api/0.log" Nov 28 18:39:51 crc kubenswrapper[5024]: I1128 18:39:51.346476 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pkt6w_98dfedf7-c96b-4029-8893-74f4abd9124b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:51 crc kubenswrapper[5024]: I1128 18:39:51.567028 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ca6fc94f-0267-49f2-8af3-269c86335d27/nova-metadata-log/0.log" Nov 28 18:39:51 crc kubenswrapper[5024]: I1128 18:39:51.880057 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e558c904-f9dd-4fe7-8a76-80935850c018/nova-scheduler-scheduler/0.log" Nov 28 18:39:51 crc kubenswrapper[5024]: I1128 18:39:51.894405 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_27bdb46e-71e8-41d7-b796-b10d95025f95/mysql-bootstrap/0.log" Nov 28 18:39:52 crc kubenswrapper[5024]: I1128 18:39:52.101803 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_27bdb46e-71e8-41d7-b796-b10d95025f95/mysql-bootstrap/0.log" Nov 28 18:39:52 crc kubenswrapper[5024]: I1128 18:39:52.110877 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_27bdb46e-71e8-41d7-b796-b10d95025f95/galera/0.log" Nov 28 18:39:52 crc kubenswrapper[5024]: I1128 18:39:52.282532 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_89e70753-1dcf-4ff8-8859-5bd6d55cbe47/mysql-bootstrap/0.log" Nov 28 18:39:52 crc kubenswrapper[5024]: I1128 18:39:52.519239 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_89e70753-1dcf-4ff8-8859-5bd6d55cbe47/galera/0.log" Nov 28 18:39:52 crc kubenswrapper[5024]: I1128 18:39:52.540738 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_89e70753-1dcf-4ff8-8859-5bd6d55cbe47/mysql-bootstrap/0.log" Nov 28 18:39:52 crc kubenswrapper[5024]: I1128 18:39:52.712970 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_59a27079-b0f6-49dd-8b5e-516096f3d0e8/openstackclient/0.log" Nov 28 18:39:52 crc kubenswrapper[5024]: I1128 18:39:52.850027 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gwmd4_50b88778-9829-4418-bfc4-a7377039d584/ovn-controller/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.058754 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-n7llb_9a326ee1-ef89-452c-a314-fff7af6fb65f/openstack-network-exporter/0.log" Nov 28 18:39:53 crc 
kubenswrapper[5024]: I1128 18:39:53.384077 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tst7t_b7387769-8164-4608-aa9a-51bf86870cad/ovsdb-server-init/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.410650 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tst7t_b7387769-8164-4608-aa9a-51bf86870cad/ovsdb-server-init/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.421674 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tst7t_b7387769-8164-4608-aa9a-51bf86870cad/ovs-vswitchd/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.571008 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tst7t_b7387769-8164-4608-aa9a-51bf86870cad/ovsdb-server/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.695189 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jhknv_804c2c31-2211-4c96-8f9f-a9c96543d8c7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.891315 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4ff0447c-7f25-4d0a-a58b-d5fff6673749/openstack-network-exporter/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.909258 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4ff0447c-7f25-4d0a-a58b-d5fff6673749/ovn-northd/0.log" Nov 28 18:39:53 crc kubenswrapper[5024]: I1128 18:39:53.921662 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ca6fc94f-0267-49f2-8af3-269c86335d27/nova-metadata-metadata/0.log" Nov 28 18:39:54 crc kubenswrapper[5024]: I1128 18:39:54.126908 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_620e671a-94a6-4ebb-807d-88c062028090/openstack-network-exporter/0.log" Nov 28 18:39:54 crc kubenswrapper[5024]: I1128 18:39:54.149908 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_620e671a-94a6-4ebb-807d-88c062028090/ovsdbserver-nb/0.log" Nov 28 18:39:54 crc kubenswrapper[5024]: I1128 18:39:54.359979 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_67f2019a-e1ff-46c7-9ec9-a1762e83f1c1/openstack-network-exporter/0.log" Nov 28 18:39:54 crc kubenswrapper[5024]: I1128 18:39:54.400772 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_67f2019a-e1ff-46c7-9ec9-a1762e83f1c1/ovsdbserver-sb/0.log" Nov 28 18:39:54 crc kubenswrapper[5024]: I1128 18:39:54.587726 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dc99dc88d-6bdv9_94ae62fc-6645-4656-a1e9-9fcedf478bd9/placement-api/0.log" Nov 28 18:39:54 crc kubenswrapper[5024]: I1128 18:39:54.748689 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dc99dc88d-6bdv9_94ae62fc-6645-4656-a1e9-9fcedf478bd9/placement-log/0.log" Nov 28 18:39:54 crc kubenswrapper[5024]: I1128 18:39:54.769741 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_32f8d83a-8bc1-446c-a314-451f4abd915b/init-config-reloader/0.log" Nov 28 18:39:55 crc kubenswrapper[5024]: I1128 18:39:55.013360 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_32f8d83a-8bc1-446c-a314-451f4abd915b/prometheus/0.log" Nov 
28 18:39:55 crc kubenswrapper[5024]: I1128 18:39:55.020665 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_32f8d83a-8bc1-446c-a314-451f4abd915b/thanos-sidecar/0.log" Nov 28 18:39:55 crc kubenswrapper[5024]: I1128 18:39:55.022267 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_32f8d83a-8bc1-446c-a314-451f4abd915b/init-config-reloader/0.log" Nov 28 18:39:55 crc kubenswrapper[5024]: I1128 18:39:55.046566 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_32f8d83a-8bc1-446c-a314-451f4abd915b/config-reloader/0.log" Nov 28 18:39:55 crc kubenswrapper[5024]: I1128 18:39:55.365435 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fae95bc-19b8-4274-ab02-cc6ebf195fe7/setup-container/0.log" Nov 28 18:39:55 crc kubenswrapper[5024]: I1128 18:39:55.854134 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fae95bc-19b8-4274-ab02-cc6ebf195fe7/setup-container/0.log" Nov 28 18:39:55 crc kubenswrapper[5024]: I1128 18:39:55.919662 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fae95bc-19b8-4274-ab02-cc6ebf195fe7/rabbitmq/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.024877 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_81a9271f-4842-4922-a19f-11de21871c68/setup-container/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.190986 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_81a9271f-4842-4922-a19f-11de21871c68/setup-container/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.248675 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_81a9271f-4842-4922-a19f-11de21871c68/rabbitmq/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.286817 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bsqns_dce32347-3163-4eaa-8bc8-43e812be9ead/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.439790 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qv6fn_84151dea-3c62-4ac2-a85d-55b7bafba2ac/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.532982 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-x6ggt_a6f96dc0-0ac5-4a4a-a888-870195dca5d0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.715797 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-2krjx_13a4c9e2-93df-4ec3-801a-4674e2ac1f50/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:56 crc kubenswrapper[5024]: I1128 18:39:56.871343 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-rs8rp_41635a67-7e43-4d50-a1d7-57c8d6fe55a7/ssh-known-hosts-edpm-deployment/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.137617 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-74757657c9-s2n28_634068c7-593f-43ee-8b4e-4be8f66c51c5/proxy-server/0.log" Nov 
28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.172290 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-74757657c9-s2n28_634068c7-593f-43ee-8b4e-4be8f66c51c5/proxy-httpd/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.279334 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hbk2s_5f5d78ed-9ad2-4d55-939b-6bb8f2bfa7dd/swift-ring-rebalance/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.408880 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/account-reaper/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.460101 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/account-auditor/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.584714 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/account-replicator/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.642804 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/account-server/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.650795 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/container-auditor/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.785694 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/container-replicator/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.915212 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/container-updater/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.929783 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/object-auditor/0.log" Nov 28 18:39:57 crc kubenswrapper[5024]: I1128 18:39:57.937598 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/container-server/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.105471 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/object-expirer/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.173939 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/object-replicator/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.222640 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/object-server/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.344517 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/object-updater/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.357986 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/rsync/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.474888 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_aa2554f8-7d4e-425d-a74a-3322dc09d7ed/swift-recon-cron/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.613410 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cbr5c_2b7c4fbd-b022-4a14-ae1a-18dfa307493f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.731852 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-t2l49_68ce2acd-5232-4e99-8f05-0c0e50c1d060/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:58 crc kubenswrapper[5024]: I1128 18:39:58.969592 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b1fff600-22cd-4f7e-bc4c-f666a06c01bb/test-operator-logs-container/0.log" Nov 28 18:39:59 crc kubenswrapper[5024]: I1128 18:39:59.175735 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4d86n_a43b660d-89bb-407a-8661-654ddda19d22/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 18:39:59 crc kubenswrapper[5024]: I1128 18:39:59.845335 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_38ea9d2b-3972-4bda-9cdd-c341334be5d1/tempest-tests-tempest-tests-runner/0.log" Nov 28 18:40:06 crc kubenswrapper[5024]: I1128 18:40:06.697778 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1fe32246-2e6f-47af-85ae-ea93f6e05037/memcached/0.log" Nov 28 18:40:27 crc kubenswrapper[5024]: I1128 18:40:27.855645 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-b7b9m_306b6495-72ef-41db-8bb8-7e3c7f4105f1/kube-rbac-proxy/0.log" Nov 28 18:40:27 crc kubenswrapper[5024]: I1128 18:40:27.983066 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-b7b9m_306b6495-72ef-41db-8bb8-7e3c7f4105f1/manager/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.123117 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-v2mb6_433b0a08-3f38-4113-bab1-49eb5f2e0009/kube-rbac-proxy/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.186658 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-v2mb6_433b0a08-3f38-4113-bab1-49eb5f2e0009/manager/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.272876 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-mvhfv_8f617e42-6f3a-45cd-86c7-58b571a13c00/kube-rbac-proxy/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.328357 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-mvhfv_8f617e42-6f3a-45cd-86c7-58b571a13c00/manager/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.477195 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4_f1a26dc6-74d7-4850-9af2-2e136ce1a480/util/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.641205 5024 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4_f1a26dc6-74d7-4850-9af2-2e136ce1a480/pull/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.649661 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4_f1a26dc6-74d7-4850-9af2-2e136ce1a480/util/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.705721 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4_f1a26dc6-74d7-4850-9af2-2e136ce1a480/pull/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.835307 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4_f1a26dc6-74d7-4850-9af2-2e136ce1a480/pull/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.848523 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4_f1a26dc6-74d7-4850-9af2-2e136ce1a480/util/0.log" Nov 28 18:40:28 crc kubenswrapper[5024]: I1128 18:40:28.857772 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb9klw4_f1a26dc6-74d7-4850-9af2-2e136ce1a480/extract/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.019238 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-5vxc8_c242c002-7db6-4753-9e37-8b61faa233e7/kube-rbac-proxy/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.069512 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-5vxc8_c242c002-7db6-4753-9e37-8b61faa233e7/manager/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.079164 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-vk754_7b427f08-8eba-4f54-ad75-6cf94b532537/kube-rbac-proxy/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.329118 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-vk754_7b427f08-8eba-4f54-ad75-6cf94b532537/manager/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.332100 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-htnxm_dd8097de-552e-414a-98d1-314930b2d45b/manager/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.346611 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-htnxm_dd8097de-552e-414a-98d1-314930b2d45b/kube-rbac-proxy/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.558208 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-nxs7s_7178ca93-de7b-4c2b-8235-41c6dbd4b1a1/kube-rbac-proxy/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.762797 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-nxs7s_7178ca93-de7b-4c2b-8235-41c6dbd4b1a1/manager/0.log" Nov 28 18:40:29 crc 
kubenswrapper[5024]: I1128 18:40:29.788531 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-6wjhl_0c2c7e62-d724-45fa-8058-085b951992fc/kube-rbac-proxy/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.849924 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-6wjhl_0c2c7e62-d724-45fa-8058-085b951992fc/manager/0.log" Nov 28 18:40:29 crc kubenswrapper[5024]: I1128 18:40:29.947180 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-k8qw6_c19bfd5c-ac24-41e8-95d0-1c0b6661032d/kube-rbac-proxy/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.041913 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-k8qw6_c19bfd5c-ac24-41e8-95d0-1c0b6661032d/manager/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.081947 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-xb9dw_f3789406-9551-4b4e-9145-86152566a0f8/kube-rbac-proxy/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.148890 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-xb9dw_f3789406-9551-4b4e-9145-86152566a0f8/manager/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.216384 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-nwtnw_14970290-c7f7-4b41-9238-1c4127416b42/kube-rbac-proxy/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.291176 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-nwtnw_14970290-c7f7-4b41-9238-1c4127416b42/manager/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.467258 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-t8wwx_3052f534-e5d3-4ac8-8865-8a6de75dc6a2/manager/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.485798 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-t8wwx_3052f534-e5d3-4ac8-8865-8a6de75dc6a2/kube-rbac-proxy/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.639351 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-tqqp8_cdc496b3-475b-4a1a-8426-c5f470030d20/kube-rbac-proxy/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.711939 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-tqqp8_cdc496b3-475b-4a1a-8426-c5f470030d20/manager/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.729758 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-98vj7_6634c4c8-389e-4b40-bc1b-c21e833569cd/kube-rbac-proxy/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.830341 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-98vj7_6634c4c8-389e-4b40-bc1b-c21e833569cd/manager/0.log" 
Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.888418 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr_ec29f6e1-030b-4bce-a179-102ef4038e17/kube-rbac-proxy/0.log" Nov 28 18:40:30 crc kubenswrapper[5024]: I1128 18:40:30.958606 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4whwmr_ec29f6e1-030b-4bce-a179-102ef4038e17/manager/0.log" Nov 28 18:40:31 crc kubenswrapper[5024]: I1128 18:40:31.398608 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-96cfcb97f-jcxf2_a1ca8cb5-5428-42b6-a72a-332ee1851a88/operator/0.log" Nov 28 18:40:31 crc kubenswrapper[5024]: I1128 18:40:31.509350 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pvv5m_6dfe3a90-7ca0-4e52-9c18-4cb3f828aca6/registry-server/0.log" Nov 28 18:40:31 crc kubenswrapper[5024]: I1128 18:40:31.662665 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-gdvrn_fd737aa9-6973-41a6-8b79-03d85540253c/kube-rbac-proxy/0.log" Nov 28 18:40:31 crc kubenswrapper[5024]: I1128 18:40:31.800048 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-hrbx6_f9991185-b617-4567-b70f-4adf629d5aab/kube-rbac-proxy/0.log" Nov 28 18:40:31 crc kubenswrapper[5024]: I1128 18:40:31.849227 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-gdvrn_fd737aa9-6973-41a6-8b79-03d85540253c/manager/0.log" Nov 28 18:40:31 crc kubenswrapper[5024]: I1128 18:40:31.934350 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-hrbx6_f9991185-b617-4567-b70f-4adf629d5aab/manager/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.102995 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-phvrw_c8d40417-67d5-4a1c-ab22-1f2afd6f1ff2/operator/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.164885 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-27b8t_c98df7f0-4e94-48f8-9ef1-2148b7909e24/kube-rbac-proxy/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.360708 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-27b8t_c98df7f0-4e94-48f8-9ef1-2148b7909e24/manager/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.376896 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6b5d64d475-v8bhk_7bfcb463-0064-4758-bbe8-70b0afd2b3bd/kube-rbac-proxy/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.648991 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-skq8p_09ca01b9-ef1e-443d-90af-101d476cbcb5/manager/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.670701 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-skq8p_09ca01b9-ef1e-443d-90af-101d476cbcb5/kube-rbac-proxy/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.752394 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-668879d68f-zgrkk_e3a51773-e3f0-4e2f-b53c-8eede799ef4b/manager/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.816466 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6b5d64d475-v8bhk_7bfcb463-0064-4758-bbe8-70b0afd2b3bd/manager/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.822152 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-9zx4m_3d3cfd45-e574-415e-87a6-2fab660d955a/kube-rbac-proxy/0.log" Nov 28 18:40:32 crc kubenswrapper[5024]: I1128 18:40:32.949834 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-9zx4m_3d3cfd45-e574-415e-87a6-2fab660d955a/manager/0.log" Nov 28 18:40:52 crc kubenswrapper[5024]: I1128 18:40:52.088358 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2cw8g_10abaa97-056b-4cd6-adbb-36b64dcef7cd/control-plane-machine-set-operator/0.log" Nov 28 18:40:52 crc kubenswrapper[5024]: I1128 18:40:52.187887 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vk6x4_c1be805d-70ab-4dfa-aa6f-23b846d64124/kube-rbac-proxy/0.log" Nov 28 18:40:52 crc kubenswrapper[5024]: I1128 18:40:52.293896 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vk6x4_c1be805d-70ab-4dfa-aa6f-23b846d64124/machine-api-operator/0.log" Nov 28 18:41:05 crc kubenswrapper[5024]: I1128 18:41:05.288194 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-l8xp5_57f11f62-daab-4268-9107-f97095a8cc24/cert-manager-controller/0.log" Nov 28 18:41:05 crc kubenswrapper[5024]: I1128 18:41:05.751314 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-v2rnd_c469de85-5ad7-4f96-9db9-d4db161236d9/cert-manager-cainjector/0.log" Nov 28 18:41:05 crc kubenswrapper[5024]: I1128 18:41:05.811426 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-l7mzh_184af68b-5dc9-41ec-b2fc-11ea0e1cb8ac/cert-manager-webhook/0.log" Nov 28 18:41:18 crc kubenswrapper[5024]: I1128 18:41:18.433036 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-cqwwz_fda0f5a7-9a36-4090-8a0e-f3c635396eff/nmstate-console-plugin/0.log" Nov 28 18:41:18 crc kubenswrapper[5024]: I1128 18:41:18.585901 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8gxnt_d996313c-5bc6-4930-a202-ca55774866c0/nmstate-handler/0.log" Nov 28 18:41:18 crc kubenswrapper[5024]: I1128 18:41:18.673825 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-spqhp_570a7ddb-1a00-4e87-8db0-32760d8455d9/kube-rbac-proxy/0.log" Nov 28 18:41:18 crc kubenswrapper[5024]: I1128 18:41:18.697415 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-spqhp_570a7ddb-1a00-4e87-8db0-32760d8455d9/nmstate-metrics/0.log" Nov 28 18:41:18 crc kubenswrapper[5024]: I1128 18:41:18.801672 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-gkv54_34dea1ac-8ada-4d52-b458-6383c62ad1d4/nmstate-operator/0.log" Nov 28 18:41:18 crc kubenswrapper[5024]: I1128 18:41:18.907460 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-pw8c8_bd456cf2-7c4f-4ba6-9be7-85d96c86e3a5/nmstate-webhook/0.log" Nov 28 18:41:31 crc kubenswrapper[5024]: I1128 18:41:31.647901 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-d7f585bbf-gt482_f89c9ab8-a552-4228-9dbc-2af4129a1be3/kube-rbac-proxy/0.log" Nov 28 18:41:31 crc kubenswrapper[5024]: I1128 18:41:31.681185 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-d7f585bbf-gt482_f89c9ab8-a552-4228-9dbc-2af4129a1be3/manager/0.log" Nov 28 18:41:45 crc kubenswrapper[5024]: I1128 18:41:45.492512 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-zdmd8_5a04dfeb-c7c2-443a-affd-11879c5e2b5d/cluster-logging-operator/0.log" Nov 28 18:41:45 crc kubenswrapper[5024]: I1128 18:41:45.667266 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-4f7qn_c0fc3afa-db9d-4db8-9d2b-acf321068b1e/collector/0.log" Nov 28 18:41:45 crc kubenswrapper[5024]: I1128 18:41:45.696866 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_50be66da-8b03-4827-8012-25c2140b64ac/loki-compactor/0.log" Nov 28 18:41:45 crc kubenswrapper[5024]: I1128 18:41:45.889343 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8f58fb6f6-qsbvr_000df583-958f-43ae-b8f5-36a537d3d3d8/gateway/0.log" Nov 28 18:41:45 crc kubenswrapper[5024]: I1128 18:41:45.920407 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-mm6j7_bfbd5a2d-412b-4b26-9205-aaa29032a355/loki-distributor/0.log" Nov 28 18:41:45 crc kubenswrapper[5024]: I1128 18:41:45.975178 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8f58fb6f6-qsbvr_000df583-958f-43ae-b8f5-36a537d3d3d8/opa/0.log" Nov 28 18:41:46 crc kubenswrapper[5024]: I1128 18:41:46.093276 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8f58fb6f6-zdmvm_c46c86f9-64ab-4020-9c49-799d926ba3ad/gateway/0.log" Nov 28 18:41:46 crc kubenswrapper[5024]: I1128 18:41:46.138857 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-8f58fb6f6-zdmvm_c46c86f9-64ab-4020-9c49-799d926ba3ad/opa/0.log" Nov 28 18:41:46 crc kubenswrapper[5024]: I1128 18:41:46.269821 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_15f007e2-eb1e-43b1-94cd-cf82cfadad4e/loki-index-gateway/0.log" Nov 28 18:41:46 crc kubenswrapper[5024]: I1128 18:41:46.401214 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_91551520-15fb-40e8-9289-842fbcfadb7f/loki-ingester/0.log" Nov 28 18:41:46 crc kubenswrapper[5024]: I1128 18:41:46.471588 5024 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-9pdl6_f9c34353-2dbd-495c-9fc8-44773dc2bd68/loki-querier/0.log" Nov 28 18:41:46 crc kubenswrapper[5024]: I1128 18:41:46.593892 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-jzttp_98fe9e7c-1bfa-4f87-8c04-7c0a660db429/loki-query-frontend/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.143723 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-gh2lw_7a5e4911-39d4-47fb-84f6-b7382b5d3d0c/kube-rbac-proxy/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.325777 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-gh2lw_7a5e4911-39d4-47fb-84f6-b7382b5d3d0c/controller/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.361896 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-frr-files/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.576660 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-reloader/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.600201 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-metrics/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.600212 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-frr-files/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.609306 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-reloader/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.748468 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-frr-files/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.829911 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-metrics/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.843471 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-metrics/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.843615 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-reloader/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.976187 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-reloader/0.log" Nov 28 18:42:00 crc kubenswrapper[5024]: I1128 18:42:00.994088 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-frr-files/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.042743 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/cp-metrics/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.044276 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/controller/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.208854 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/frr-metrics/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.232716 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/kube-rbac-proxy-frr/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.261627 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/kube-rbac-proxy/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.415758 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/reloader/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.552818 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-8v44d_0b9fbfa7-b944-4a28-b32e-011324bf44b7/frr-k8s-webhook-server/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.676427 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78fcc557d5-tzzx8_94c85c6d-1f63-4a43-96a5-850aae6a27cf/manager/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.903105 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6786944b4d-h88pn_2675bece-a200-49ea-a9b0-5e394ae7167d/webhook-server/0.log" Nov 28 18:42:01 crc kubenswrapper[5024]: I1128 18:42:01.958554 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kwp5s_a57fb6ba-d2d8-4e51-8960-a1a15e92c950/kube-rbac-proxy/0.log" Nov 28 18:42:02 crc kubenswrapper[5024]: I1128 18:42:02.688734 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kwp5s_a57fb6ba-d2d8-4e51-8960-a1a15e92c950/speaker/0.log" Nov 28 18:42:03 crc kubenswrapper[5024]: I1128 18:42:03.260380 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fbhkx_63ee2602-779a-4f8d-89e8-e741417fcba9/frr/0.log" Nov 28 18:42:07 crc kubenswrapper[5024]: I1128 18:42:07.564941 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:42:07 crc kubenswrapper[5024]: I1128 18:42:07.566864 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.307280 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jbv"] Nov 28 18:42:09 crc kubenswrapper[5024]: E1128 18:42:09.308111 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c567f4-82e2-43d6-b357-ba5ac7f30cbe" containerName="container-00" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.308127 5024 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="21c567f4-82e2-43d6-b357-ba5ac7f30cbe" containerName="container-00" Nov 28 18:42:09 crc kubenswrapper[5024]: E1128 18:42:09.308154 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="extract-content" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.308160 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="extract-content" Nov 28 18:42:09 crc kubenswrapper[5024]: E1128 18:42:09.308174 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="extract-utilities" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.308181 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="extract-utilities" Nov 28 18:42:09 crc kubenswrapper[5024]: E1128 18:42:09.308200 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="registry-server" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.308206 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="registry-server" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.308406 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f2f22d-175d-4e45-a63b-82558fb12878" containerName="registry-server" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.308426 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c567f4-82e2-43d6-b357-ba5ac7f30cbe" containerName="container-00" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.310616 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.324866 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jbv"] Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.383209 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-catalog-content\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.383407 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nfcf\" (UniqueName: \"kubernetes.io/projected/866e57be-1edf-4898-9146-c88bf111de09-kube-api-access-5nfcf\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.383466 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-utilities\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.486300 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-catalog-content\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.486508 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nfcf\" (UniqueName: \"kubernetes.io/projected/866e57be-1edf-4898-9146-c88bf111de09-kube-api-access-5nfcf\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.486583 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-utilities\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.486837 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-catalog-content\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.487100 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-utilities\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.506012 5024 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5nfcf\" (UniqueName: \"kubernetes.io/projected/866e57be-1edf-4898-9146-c88bf111de09-kube-api-access-5nfcf\") pod \"redhat-marketplace-t8jbv\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:09 crc kubenswrapper[5024]: I1128 18:42:09.635210 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:10 crc kubenswrapper[5024]: I1128 18:42:10.194930 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jbv"] Nov 28 18:42:10 crc kubenswrapper[5024]: I1128 18:42:10.973582 5024 generic.go:334] "Generic (PLEG): container finished" podID="866e57be-1edf-4898-9146-c88bf111de09" containerID="b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568" exitCode=0 Nov 28 18:42:10 crc kubenswrapper[5024]: I1128 18:42:10.973737 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jbv" event={"ID":"866e57be-1edf-4898-9146-c88bf111de09","Type":"ContainerDied","Data":"b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568"} Nov 28 18:42:10 crc kubenswrapper[5024]: I1128 18:42:10.973930 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jbv" event={"ID":"866e57be-1edf-4898-9146-c88bf111de09","Type":"ContainerStarted","Data":"282ce2905dc77e3408c434e6f207f4947886a720c8c09a6672ae6587ef531737"} Nov 28 18:42:11 crc kubenswrapper[5024]: I1128 18:42:11.584637 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 18:42:12 crc kubenswrapper[5024]: I1128 18:42:12.995153 5024 generic.go:334] "Generic (PLEG): container finished" podID="866e57be-1edf-4898-9146-c88bf111de09" containerID="06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8" exitCode=0 Nov 28 18:42:12 crc kubenswrapper[5024]: I1128 18:42:12.995341 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jbv" event={"ID":"866e57be-1edf-4898-9146-c88bf111de09","Type":"ContainerDied","Data":"06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8"} Nov 28 18:42:14 crc kubenswrapper[5024]: I1128 18:42:14.014200 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jbv" event={"ID":"866e57be-1edf-4898-9146-c88bf111de09","Type":"ContainerStarted","Data":"9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482"} Nov 28 18:42:14 crc kubenswrapper[5024]: I1128 18:42:14.039233 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t8jbv" podStartSLOduration=2.442453458 podStartE2EDuration="5.039208074s" podCreationTimestamp="2025-11-28 18:42:09 +0000 UTC" firstStartedPulling="2025-11-28 18:42:10.976112467 +0000 UTC m=+6233.025033372" lastFinishedPulling="2025-11-28 18:42:13.572867083 +0000 UTC m=+6235.621787988" observedRunningTime="2025-11-28 18:42:14.036163977 +0000 UTC m=+6236.085084882" watchObservedRunningTime="2025-11-28 18:42:14.039208074 +0000 UTC m=+6236.088128979" Nov 28 18:42:17 crc kubenswrapper[5024]: I1128 18:42:17.642462 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g_a2f27c25-5fba-497d-ab04-88a773c09bf7/util/0.log" Nov 28 18:42:17 crc kubenswrapper[5024]: I1128 
Nov 28 18:42:17 crc kubenswrapper[5024]: I1128 18:42:17.863361 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g_a2f27c25-5fba-497d-ab04-88a773c09bf7/pull/0.log"
Nov 28 18:42:17 crc kubenswrapper[5024]: I1128 18:42:17.870452 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g_a2f27c25-5fba-497d-ab04-88a773c09bf7/util/0.log"
Nov 28 18:42:17 crc kubenswrapper[5024]: I1128 18:42:17.890971 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g_a2f27c25-5fba-497d-ab04-88a773c09bf7/pull/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.070003 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g_a2f27c25-5fba-497d-ab04-88a773c09bf7/util/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.115052 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g_a2f27c25-5fba-497d-ab04-88a773c09bf7/pull/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.137954 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8k7s2g_a2f27c25-5fba-497d-ab04-88a773c09bf7/extract/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.280019 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl_0cdd446a-fa00-4fe8-8a53-979244f522b4/util/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.454287 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl_0cdd446a-fa00-4fe8-8a53-979244f522b4/util/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.467542 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl_0cdd446a-fa00-4fe8-8a53-979244f522b4/pull/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.494196 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl_0cdd446a-fa00-4fe8-8a53-979244f522b4/pull/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.686493 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl_0cdd446a-fa00-4fe8-8a53-979244f522b4/extract/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.694054 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl_0cdd446a-fa00-4fe8-8a53-979244f522b4/util/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.732430 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fl7bpl_0cdd446a-fa00-4fe8-8a53-979244f522b4/pull/0.log"
Nov 28 18:42:18 crc kubenswrapper[5024]: I1128 18:42:18.903590 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx_d8b87fe5-2e8a-4f1c-9ca4-4732b192d121/util/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.094718 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx_d8b87fe5-2e8a-4f1c-9ca4-4732b192d121/pull/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.151319 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx_d8b87fe5-2e8a-4f1c-9ca4-4732b192d121/util/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.199965 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx_d8b87fe5-2e8a-4f1c-9ca4-4732b192d121/pull/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.317421 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx_d8b87fe5-2e8a-4f1c-9ca4-4732b192d121/util/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.343169 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx_d8b87fe5-2e8a-4f1c-9ca4-4732b192d121/pull/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.448620 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921064mkx_d8b87fe5-2e8a-4f1c-9ca4-4732b192d121/extract/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.540703 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt_b90e9055-da41-4e44-b546-6b1de6fd44eb/util/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.636240 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t8jbv"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.636409 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t8jbv"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.697052 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t8jbv"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.738619 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt_b90e9055-da41-4e44-b546-6b1de6fd44eb/pull/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.783265 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt_b90e9055-da41-4e44-b546-6b1de6fd44eb/util/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.823555 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt_b90e9055-da41-4e44-b546-6b1de6fd44eb/pull/0.log"
Nov 28 18:42:19 crc kubenswrapper[5024]: I1128 18:42:19.997833 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt_b90e9055-da41-4e44-b546-6b1de6fd44eb/pull/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.019790 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt_b90e9055-da41-4e44-b546-6b1de6fd44eb/extract/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.045718 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fb2cbt_b90e9055-da41-4e44-b546-6b1de6fd44eb/util/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.144491 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t8jbv"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.238804 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jbv"]
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.279087 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj_0814b582-694e-41f0-bcd0-04311a2471d2/util/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.474277 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj_0814b582-694e-41f0-bcd0-04311a2471d2/util/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.499566 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj_0814b582-694e-41f0-bcd0-04311a2471d2/pull/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.516761 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj_0814b582-694e-41f0-bcd0-04311a2471d2/pull/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.730879 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj_0814b582-694e-41f0-bcd0-04311a2471d2/util/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.775122 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj_0814b582-694e-41f0-bcd0-04311a2471d2/extract/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.785342 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f832gfpj_0814b582-694e-41f0-bcd0-04311a2471d2/pull/0.log"
Nov 28 18:42:20 crc kubenswrapper[5024]: I1128 18:42:20.980910 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9fxw_4f432326-1cdb-4caf-a0d0-c25304f63d47/extract-utilities/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.145661 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9fxw_4f432326-1cdb-4caf-a0d0-c25304f63d47/extract-utilities/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.178658 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9fxw_4f432326-1cdb-4caf-a0d0-c25304f63d47/extract-content/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.185099 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9fxw_4f432326-1cdb-4caf-a0d0-c25304f63d47/extract-content/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.367244 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9fxw_4f432326-1cdb-4caf-a0d0-c25304f63d47/extract-utilities/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.397949 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9fxw_4f432326-1cdb-4caf-a0d0-c25304f63d47/extract-content/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.438375 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dk5n8_a540a1fb-d34b-4c55-8262-e355bfc402b7/extract-utilities/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.615439 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dk5n8_a540a1fb-d34b-4c55-8262-e355bfc402b7/extract-utilities/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.624796 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dk5n8_a540a1fb-d34b-4c55-8262-e355bfc402b7/extract-content/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.719367 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dk5n8_a540a1fb-d34b-4c55-8262-e355bfc402b7/extract-content/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.844008 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9fxw_4f432326-1cdb-4caf-a0d0-c25304f63d47/registry-server/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.871356 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dk5n8_a540a1fb-d34b-4c55-8262-e355bfc402b7/extract-content/0.log"
Nov 28 18:42:21 crc kubenswrapper[5024]: I1128 18:42:21.926385 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dk5n8_a540a1fb-d34b-4c55-8262-e355bfc402b7/extract-utilities/0.log"
Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.062757 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vnd7q_02049d91-d768-4285-8a95-b88d379bee70/marketplace-operator/0.log"
Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.108658 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t8jbv" podUID="866e57be-1edf-4898-9146-c88bf111de09" containerName="registry-server" containerID="cri-o://9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482" gracePeriod=2
Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.207729 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cl4bm_38ed0b11-7e2e-4592-9ffc-9851bc16e811/extract-utilities/0.log"
path="/var/log/pods/openshift-marketplace_redhat-marketplace-cl4bm_38ed0b11-7e2e-4592-9ffc-9851bc16e811/extract-content/0.log" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.420293 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cl4bm_38ed0b11-7e2e-4592-9ffc-9851bc16e811/extract-utilities/0.log" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.437323 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cl4bm_38ed0b11-7e2e-4592-9ffc-9851bc16e811/extract-content/0.log" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.586801 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dk5n8_a540a1fb-d34b-4c55-8262-e355bfc402b7/registry-server/0.log" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.701782 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cl4bm_38ed0b11-7e2e-4592-9ffc-9851bc16e811/extract-utilities/0.log" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.702005 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cl4bm_38ed0b11-7e2e-4592-9ffc-9851bc16e811/extract-content/0.log" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.746689 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.829047 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-catalog-content\") pod \"866e57be-1edf-4898-9146-c88bf111de09\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.829234 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nfcf\" (UniqueName: \"kubernetes.io/projected/866e57be-1edf-4898-9146-c88bf111de09-kube-api-access-5nfcf\") pod \"866e57be-1edf-4898-9146-c88bf111de09\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.829321 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-utilities\") pod \"866e57be-1edf-4898-9146-c88bf111de09\" (UID: \"866e57be-1edf-4898-9146-c88bf111de09\") " Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.832419 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-utilities" (OuterVolumeSpecName: "utilities") pod "866e57be-1edf-4898-9146-c88bf111de09" (UID: "866e57be-1edf-4898-9146-c88bf111de09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.841312 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866e57be-1edf-4898-9146-c88bf111de09-kube-api-access-5nfcf" (OuterVolumeSpecName: "kube-api-access-5nfcf") pod "866e57be-1edf-4898-9146-c88bf111de09" (UID: "866e57be-1edf-4898-9146-c88bf111de09"). InnerVolumeSpecName "kube-api-access-5nfcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.852843 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "866e57be-1edf-4898-9146-c88bf111de09" (UID: "866e57be-1edf-4898-9146-c88bf111de09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.913912 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t8jbv_866e57be-1edf-4898-9146-c88bf111de09/extract-utilities/0.log" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.932244 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.932269 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e57be-1edf-4898-9146-c88bf111de09-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.932279 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nfcf\" (UniqueName: \"kubernetes.io/projected/866e57be-1edf-4898-9146-c88bf111de09-kube-api-access-5nfcf\") on node \"crc\" DevicePath \"\"" Nov 28 18:42:22 crc kubenswrapper[5024]: I1128 18:42:22.942842 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cl4bm_38ed0b11-7e2e-4592-9ffc-9851bc16e811/registry-server/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.068331 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t8jbv_866e57be-1edf-4898-9146-c88bf111de09/extract-utilities/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.098812 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t8jbv_866e57be-1edf-4898-9146-c88bf111de09/extract-content/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.106612 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-t8jbv_866e57be-1edf-4898-9146-c88bf111de09/extract-content/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.121087 5024 generic.go:334] "Generic (PLEG): container finished" podID="866e57be-1edf-4898-9146-c88bf111de09" containerID="9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482" exitCode=0 Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.121150 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jbv" event={"ID":"866e57be-1edf-4898-9146-c88bf111de09","Type":"ContainerDied","Data":"9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482"} Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.121397 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jbv" event={"ID":"866e57be-1edf-4898-9146-c88bf111de09","Type":"ContainerDied","Data":"282ce2905dc77e3408c434e6f207f4947886a720c8c09a6672ae6587ef531737"} Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.121162 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jbv" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.121445 5024 scope.go:117] "RemoveContainer" containerID="9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.159886 5024 scope.go:117] "RemoveContainer" containerID="06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.165539 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jbv"] Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.179892 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jbv"] Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.220981 5024 scope.go:117] "RemoveContainer" containerID="b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.246036 5024 scope.go:117] "RemoveContainer" containerID="9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482" Nov 28 18:42:23 crc kubenswrapper[5024]: E1128 18:42:23.246383 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482\": container with ID starting with 9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482 not found: ID does not exist" containerID="9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.246427 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482"} err="failed to get container status \"9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482\": rpc error: code = NotFound desc = could not find container \"9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482\": container with ID starting with 9a979e3e6347db8f22e24570bed441661aa02240e4fb2934f829ab22341ac482 not found: ID does not exist" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.246453 5024 scope.go:117] "RemoveContainer" containerID="06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8" Nov 28 18:42:23 crc kubenswrapper[5024]: E1128 18:42:23.246670 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8\": container with ID starting with 06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8 not found: ID does not exist" containerID="06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.246691 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8"} err="failed to get container status \"06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8\": rpc error: code = NotFound desc = could not find container \"06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8\": container with ID starting with 06af63a1134e752b97ecf5a078bfc10e17daeb777b8c087825232bd4062d88c8 not found: ID does not exist" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.246705 5024 scope.go:117] "RemoveContainer" 
containerID="b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568" Nov 28 18:42:23 crc kubenswrapper[5024]: E1128 18:42:23.246857 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568\": container with ID starting with b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568 not found: ID does not exist" containerID="b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.246876 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568"} err="failed to get container status \"b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568\": rpc error: code = NotFound desc = could not find container \"b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568\": container with ID starting with b1daed28bf2cecab26be66c4ed84f8a15d60b1886117940465bf9e670fb01568 not found: ID does not exist" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.431768 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgv2_82167b6a-2e43-4adb-9b4a-7c4d53f65979/extract-utilities/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.550807 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgv2_82167b6a-2e43-4adb-9b4a-7c4d53f65979/extract-utilities/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.551947 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgv2_82167b6a-2e43-4adb-9b4a-7c4d53f65979/extract-content/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.563155 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgv2_82167b6a-2e43-4adb-9b4a-7c4d53f65979/extract-content/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.867190 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgv2_82167b6a-2e43-4adb-9b4a-7c4d53f65979/extract-content/0.log" Nov 28 18:42:23 crc kubenswrapper[5024]: I1128 18:42:23.930575 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgv2_82167b6a-2e43-4adb-9b4a-7c4d53f65979/extract-utilities/0.log" Nov 28 18:42:24 crc kubenswrapper[5024]: I1128 18:42:24.510250 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866e57be-1edf-4898-9146-c88bf111de09" path="/var/lib/kubelet/pods/866e57be-1edf-4898-9146-c88bf111de09/volumes" Nov 28 18:42:24 crc kubenswrapper[5024]: I1128 18:42:24.875298 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rmgv2_82167b6a-2e43-4adb-9b4a-7c4d53f65979/registry-server/0.log" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.066040 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w4pqq"] Nov 28 18:42:35 crc kubenswrapper[5024]: E1128 18:42:35.069152 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866e57be-1edf-4898-9146-c88bf111de09" containerName="extract-utilities" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.069183 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="866e57be-1edf-4898-9146-c88bf111de09" 
containerName="extract-utilities" Nov 28 18:42:35 crc kubenswrapper[5024]: E1128 18:42:35.069217 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866e57be-1edf-4898-9146-c88bf111de09" containerName="extract-content" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.069224 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="866e57be-1edf-4898-9146-c88bf111de09" containerName="extract-content" Nov 28 18:42:35 crc kubenswrapper[5024]: E1128 18:42:35.069234 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866e57be-1edf-4898-9146-c88bf111de09" containerName="registry-server" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.069241 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="866e57be-1edf-4898-9146-c88bf111de09" containerName="registry-server" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.069678 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="866e57be-1edf-4898-9146-c88bf111de09" containerName="registry-server" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.071721 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.083305 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4pqq"] Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.159069 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-utilities\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.159135 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5lrt\" (UniqueName: \"kubernetes.io/projected/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-kube-api-access-k5lrt\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.159191 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-catalog-content\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.261594 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-utilities\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.261654 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5lrt\" (UniqueName: \"kubernetes.io/projected/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-kube-api-access-k5lrt\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.261692 5024 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-catalog-content\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.262307 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-catalog-content\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.262333 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-utilities\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.284430 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5lrt\" (UniqueName: \"kubernetes.io/projected/7c9af7a8-1975-4ae5-8e4f-e760d13009e1-kube-api-access-k5lrt\") pod \"community-operators-w4pqq\" (UID: \"7c9af7a8-1975-4ae5-8e4f-e760d13009e1\") " pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.393355 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:35 crc kubenswrapper[5024]: I1128 18:42:35.978486 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4pqq"] Nov 28 18:42:36 crc kubenswrapper[5024]: I1128 18:42:36.259298 5024 generic.go:334] "Generic (PLEG): container finished" podID="7c9af7a8-1975-4ae5-8e4f-e760d13009e1" containerID="97dbbfc0d499a92ecce9e78d3dfdf9c05f1dba2783d5df796fedf71fe1becf37" exitCode=0 Nov 28 18:42:36 crc kubenswrapper[5024]: I1128 18:42:36.259352 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4pqq" event={"ID":"7c9af7a8-1975-4ae5-8e4f-e760d13009e1","Type":"ContainerDied","Data":"97dbbfc0d499a92ecce9e78d3dfdf9c05f1dba2783d5df796fedf71fe1becf37"} Nov 28 18:42:36 crc kubenswrapper[5024]: I1128 18:42:36.259521 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4pqq" event={"ID":"7c9af7a8-1975-4ae5-8e4f-e760d13009e1","Type":"ContainerStarted","Data":"21e92d02e6848e741309cad4e92fe0fd15537127b83360d57eac50b62589adaf"} Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.033823 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-z5jkn_9f64c6e9-5a4e-4c00-b8c0-f88418c1b290/prometheus-operator/0.log" Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.071374 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b85f7699d-5nqxc_c22be4d1-2db0-48de-9439-c24282cf63b8/prometheus-operator-admission-webhook/0.log" Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.196699 5024 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b85f7699d-kg9t2_47a5fd85-fd8e-4b0f-84b0-9c00154e2654/prometheus-operator-admission-webhook/0.log" Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.314176 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-h25jc_7e5a62fe-852d-487a-ae2e-852fc2a21d22/operator/0.log" Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.394807 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-wp9mp_d8e43901-e042-4b90-81ed-194c512d9a90/observability-ui-dashboards/0.log" Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.502260 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-7l9j5_2427c8bc-48b6-42d2-b7fa-3a1493e45095/perses-operator/0.log" Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.565092 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:42:37 crc kubenswrapper[5024]: I1128 18:42:37.565150 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:42:42 crc kubenswrapper[5024]: I1128 18:42:42.330595 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4pqq" event={"ID":"7c9af7a8-1975-4ae5-8e4f-e760d13009e1","Type":"ContainerStarted","Data":"7ea5859f6b811434d6c5d0c67c78022d36504b5c58d8c8f7c6260f6a5d0e51f0"} Nov 28 18:42:43 crc kubenswrapper[5024]: I1128 18:42:43.343864 5024 generic.go:334] "Generic (PLEG): container finished" podID="7c9af7a8-1975-4ae5-8e4f-e760d13009e1" containerID="7ea5859f6b811434d6c5d0c67c78022d36504b5c58d8c8f7c6260f6a5d0e51f0" exitCode=0 Nov 28 18:42:43 crc kubenswrapper[5024]: I1128 18:42:43.343973 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4pqq" event={"ID":"7c9af7a8-1975-4ae5-8e4f-e760d13009e1","Type":"ContainerDied","Data":"7ea5859f6b811434d6c5d0c67c78022d36504b5c58d8c8f7c6260f6a5d0e51f0"} Nov 28 18:42:44 crc kubenswrapper[5024]: I1128 18:42:44.357817 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4pqq" event={"ID":"7c9af7a8-1975-4ae5-8e4f-e760d13009e1","Type":"ContainerStarted","Data":"39143da6b8cd96a757489f0f23442619c8459247b848c06687a48ad2d529fbaf"} Nov 28 18:42:44 crc kubenswrapper[5024]: I1128 18:42:44.382327 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w4pqq" podStartSLOduration=1.617081386 podStartE2EDuration="9.382305499s" podCreationTimestamp="2025-11-28 18:42:35 +0000 UTC" firstStartedPulling="2025-11-28 18:42:36.261206348 +0000 UTC m=+6258.310127253" lastFinishedPulling="2025-11-28 18:42:44.026430461 +0000 UTC m=+6266.075351366" observedRunningTime="2025-11-28 18:42:44.379258952 +0000 UTC m=+6266.428179857" watchObservedRunningTime="2025-11-28 18:42:44.382305499 +0000 UTC m=+6266.431226394" Nov 28 18:42:45 crc 
kubenswrapper[5024]: I1128 18:42:45.394545 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:45 crc kubenswrapper[5024]: I1128 18:42:45.394591 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:46 crc kubenswrapper[5024]: I1128 18:42:46.454572 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-w4pqq" podUID="7c9af7a8-1975-4ae5-8e4f-e760d13009e1" containerName="registry-server" probeResult="failure" output=< Nov 28 18:42:46 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:42:46 crc kubenswrapper[5024]: > Nov 28 18:42:52 crc kubenswrapper[5024]: I1128 18:42:52.451517 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-d7f585bbf-gt482_f89c9ab8-a552-4228-9dbc-2af4129a1be3/kube-rbac-proxy/0.log" Nov 28 18:42:52 crc kubenswrapper[5024]: I1128 18:42:52.575638 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-d7f585bbf-gt482_f89c9ab8-a552-4228-9dbc-2af4129a1be3/manager/0.log" Nov 28 18:42:55 crc kubenswrapper[5024]: I1128 18:42:55.454706 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:55 crc kubenswrapper[5024]: I1128 18:42:55.535508 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w4pqq" Nov 28 18:42:55 crc kubenswrapper[5024]: I1128 18:42:55.645390 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4pqq"] Nov 28 18:42:55 crc kubenswrapper[5024]: I1128 18:42:55.794389 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk5n8"] Nov 28 18:42:55 crc kubenswrapper[5024]: I1128 18:42:55.794734 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dk5n8" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="registry-server" containerID="cri-o://8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9" gracePeriod=2 Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.509295 5024 generic.go:334] "Generic (PLEG): container finished" podID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerID="8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9" exitCode=0 Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.521449 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.523395 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk5n8" event={"ID":"a540a1fb-d34b-4c55-8262-e355bfc402b7","Type":"ContainerDied","Data":"8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9"} Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.523433 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk5n8" event={"ID":"a540a1fb-d34b-4c55-8262-e355bfc402b7","Type":"ContainerDied","Data":"a18c2254b00cafcc7d9c5c2ca5ad7e28a2a6e74a8a96cfc47add745b6c7cfa25"} Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.523467 5024 scope.go:117] "RemoveContainer" containerID="8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.569792 5024 scope.go:117] "RemoveContainer" containerID="7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.605645 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj4z4\" (UniqueName: \"kubernetes.io/projected/a540a1fb-d34b-4c55-8262-e355bfc402b7-kube-api-access-vj4z4\") pod \"a540a1fb-d34b-4c55-8262-e355bfc402b7\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.605781 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-catalog-content\") pod \"a540a1fb-d34b-4c55-8262-e355bfc402b7\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.605895 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-utilities\") pod \"a540a1fb-d34b-4c55-8262-e355bfc402b7\" (UID: \"a540a1fb-d34b-4c55-8262-e355bfc402b7\") " Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.607807 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-utilities" (OuterVolumeSpecName: "utilities") pod "a540a1fb-d34b-4c55-8262-e355bfc402b7" (UID: "a540a1fb-d34b-4c55-8262-e355bfc402b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.617076 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a540a1fb-d34b-4c55-8262-e355bfc402b7-kube-api-access-vj4z4" (OuterVolumeSpecName: "kube-api-access-vj4z4") pod "a540a1fb-d34b-4c55-8262-e355bfc402b7" (UID: "a540a1fb-d34b-4c55-8262-e355bfc402b7"). InnerVolumeSpecName "kube-api-access-vj4z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.633425 5024 scope.go:117] "RemoveContainer" containerID="ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.708933 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.708968 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj4z4\" (UniqueName: \"kubernetes.io/projected/a540a1fb-d34b-4c55-8262-e355bfc402b7-kube-api-access-vj4z4\") on node \"crc\" DevicePath \"\"" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.730099 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a540a1fb-d34b-4c55-8262-e355bfc402b7" (UID: "a540a1fb-d34b-4c55-8262-e355bfc402b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.761100 5024 scope.go:117] "RemoveContainer" containerID="8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9" Nov 28 18:42:56 crc kubenswrapper[5024]: E1128 18:42:56.762416 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9\": container with ID starting with 8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9 not found: ID does not exist" containerID="8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.762463 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9"} err="failed to get container status \"8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9\": rpc error: code = NotFound desc = could not find container \"8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9\": container with ID starting with 8ac169f0229e95353ed44f6e896afe7d658c2b948902f832a43964a89102b4e9 not found: ID does not exist" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.762485 5024 scope.go:117] "RemoveContainer" containerID="7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc" Nov 28 18:42:56 crc kubenswrapper[5024]: E1128 18:42:56.769182 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc\": container with ID starting with 7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc not found: ID does not exist" containerID="7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.769348 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc"} err="failed to get container status \"7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc\": rpc error: code = NotFound desc = could not find container 
\"7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc\": container with ID starting with 7d7894371a3e9f60c10da3419c9c6aa829331c262bb5f9e9b9f4c86355e11dcc not found: ID does not exist" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.769454 5024 scope.go:117] "RemoveContainer" containerID="ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112" Nov 28 18:42:56 crc kubenswrapper[5024]: E1128 18:42:56.770518 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112\": container with ID starting with ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112 not found: ID does not exist" containerID="ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.770829 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112"} err="failed to get container status \"ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112\": rpc error: code = NotFound desc = could not find container \"ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112\": container with ID starting with ce3a51275fe98b37a9c586244ea42ef03c3cb451fd00cd16f6832102fe7f1112 not found: ID does not exist" Nov 28 18:42:56 crc kubenswrapper[5024]: I1128 18:42:56.811554 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a540a1fb-d34b-4c55-8262-e355bfc402b7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:42:57 crc kubenswrapper[5024]: I1128 18:42:57.524543 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk5n8" Nov 28 18:42:57 crc kubenswrapper[5024]: I1128 18:42:57.571097 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk5n8"] Nov 28 18:42:57 crc kubenswrapper[5024]: I1128 18:42:57.581824 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dk5n8"] Nov 28 18:42:58 crc kubenswrapper[5024]: I1128 18:42:58.534378 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" path="/var/lib/kubelet/pods/a540a1fb-d34b-4c55-8262-e355bfc402b7/volumes" Nov 28 18:43:07 crc kubenswrapper[5024]: I1128 18:43:07.564835 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:43:07 crc kubenswrapper[5024]: I1128 18:43:07.565512 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:43:07 crc kubenswrapper[5024]: I1128 18:43:07.565575 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 18:43:07 crc kubenswrapper[5024]: I1128 18:43:07.567140 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"341f4ee9b8cbef62b36434f9d731f94ba6dabce1bdd9d060f4ec6256f9507c7c"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 18:43:07 crc kubenswrapper[5024]: I1128 18:43:07.567229 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://341f4ee9b8cbef62b36434f9d731f94ba6dabce1bdd9d060f4ec6256f9507c7c" gracePeriod=600 Nov 28 18:43:08 crc kubenswrapper[5024]: I1128 18:43:08.671858 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="341f4ee9b8cbef62b36434f9d731f94ba6dabce1bdd9d060f4ec6256f9507c7c" exitCode=0 Nov 28 18:43:08 crc kubenswrapper[5024]: I1128 18:43:08.672304 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"341f4ee9b8cbef62b36434f9d731f94ba6dabce1bdd9d060f4ec6256f9507c7c"} Nov 28 18:43:08 crc kubenswrapper[5024]: I1128 18:43:08.672330 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerStarted","Data":"a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111"} Nov 28 18:43:08 crc kubenswrapper[5024]: I1128 18:43:08.672348 5024 scope.go:117] "RemoveContainer" 
containerID="6bde715587dd3e0d899c4c364efe011e2cc05aa675a35dd5c6e16380c6d13b30" Nov 28 18:44:50 crc kubenswrapper[5024]: I1128 18:44:50.945977 5024 generic.go:334] "Generic (PLEG): container finished" podID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerID="67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f" exitCode=0 Nov 28 18:44:50 crc kubenswrapper[5024]: I1128 18:44:50.946114 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l89ps/must-gather-jf5kt" event={"ID":"309a17c4-130c-4a0e-aa80-7c6254a0f2a4","Type":"ContainerDied","Data":"67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f"} Nov 28 18:44:50 crc kubenswrapper[5024]: I1128 18:44:50.947243 5024 scope.go:117] "RemoveContainer" containerID="67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f" Nov 28 18:44:51 crc kubenswrapper[5024]: I1128 18:44:51.386251 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l89ps_must-gather-jf5kt_309a17c4-130c-4a0e-aa80-7c6254a0f2a4/gather/0.log" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.173005 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j"] Nov 28 18:45:00 crc kubenswrapper[5024]: E1128 18:45:00.174359 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="extract-utilities" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.174379 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="extract-utilities" Nov 28 18:45:00 crc kubenswrapper[5024]: E1128 18:45:00.174407 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="extract-content" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.174413 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="extract-content" Nov 28 18:45:00 crc kubenswrapper[5024]: E1128 18:45:00.174428 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="registry-server" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.174434 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="registry-server" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.174747 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="a540a1fb-d34b-4c55-8262-e355bfc402b7" containerName="registry-server" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.175930 5024 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.181356 5024 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.182404 5024 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.199297 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j"] Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.199474 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vkb\" (UniqueName: \"kubernetes.io/projected/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-kube-api-access-x7vkb\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.199624 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-secret-volume\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.199654 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-config-volume\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.302290 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7vkb\" (UniqueName: \"kubernetes.io/projected/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-kube-api-access-x7vkb\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.302473 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-secret-volume\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.302505 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-config-volume\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.303896 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-config-volume\") pod 
\"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.309392 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-secret-volume\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.318857 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vkb\" (UniqueName: \"kubernetes.io/projected/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-kube-api-access-x7vkb\") pod \"collect-profiles-29405925-dm45j\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.507305 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.512439 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l89ps/must-gather-jf5kt"] Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.512743 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-l89ps/must-gather-jf5kt" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerName="copy" containerID="cri-o://a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5" gracePeriod=2 Nov 28 18:45:00 crc kubenswrapper[5024]: I1128 18:45:00.515266 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l89ps/must-gather-jf5kt"] Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.040262 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l89ps_must-gather-jf5kt_309a17c4-130c-4a0e-aa80-7c6254a0f2a4/copy/0.log" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.041465 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.051897 5024 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l89ps_must-gather-jf5kt_309a17c4-130c-4a0e-aa80-7c6254a0f2a4/copy/0.log" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.052429 5024 generic.go:334] "Generic (PLEG): container finished" podID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerID="a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5" exitCode=143 Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.052544 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l89ps/must-gather-jf5kt" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.052563 5024 scope.go:117] "RemoveContainer" containerID="a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.082693 5024 scope.go:117] "RemoveContainer" containerID="67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.127173 5024 scope.go:117] "RemoveContainer" containerID="a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5" Nov 28 18:45:01 crc kubenswrapper[5024]: E1128 18:45:01.127728 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5\": container with ID starting with a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5 not found: ID does not exist" containerID="a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.127771 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5"} err="failed to get container status \"a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5\": rpc error: code = NotFound desc = could not find container \"a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5\": container with ID starting with a5bc0a93b08454cff277013188e04899984a2292195692f971779b924613b7c5 not found: ID does not exist" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.127799 5024 scope.go:117] "RemoveContainer" containerID="67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f" Nov 28 18:45:01 crc kubenswrapper[5024]: E1128 18:45:01.128327 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f\": container with ID starting with 67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f not found: ID does not exist" containerID="67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.128399 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f"} err="failed to get container status \"67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f\": rpc error: code = NotFound desc = could not find container \"67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f\": container with ID starting with 67afe77f250ad6e177cf8100c42c2f85afc7b3dad7fd3a23d123dd386f84fc8f not found: ID does not exist" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.164712 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j"] Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.222813 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-must-gather-output\") pod \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.222863 5024 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrrl6\" (UniqueName: \"kubernetes.io/projected/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-kube-api-access-mrrl6\") pod \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\" (UID: \"309a17c4-130c-4a0e-aa80-7c6254a0f2a4\") " Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.229323 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-kube-api-access-mrrl6" (OuterVolumeSpecName: "kube-api-access-mrrl6") pod "309a17c4-130c-4a0e-aa80-7c6254a0f2a4" (UID: "309a17c4-130c-4a0e-aa80-7c6254a0f2a4"). InnerVolumeSpecName "kube-api-access-mrrl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.328482 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrrl6\" (UniqueName: \"kubernetes.io/projected/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-kube-api-access-mrrl6\") on node \"crc\" DevicePath \"\"" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.445999 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "309a17c4-130c-4a0e-aa80-7c6254a0f2a4" (UID: "309a17c4-130c-4a0e-aa80-7c6254a0f2a4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:45:01 crc kubenswrapper[5024]: I1128 18:45:01.532787 5024 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/309a17c4-130c-4a0e-aa80-7c6254a0f2a4-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 18:45:02 crc kubenswrapper[5024]: I1128 18:45:02.065278 5024 generic.go:334] "Generic (PLEG): container finished" podID="f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a" containerID="7a9fb4d0db3575fbd8dfdcdb0930870fa6d9dfea80b58ef26d832f610f054e99" exitCode=0 Nov 28 18:45:02 crc kubenswrapper[5024]: I1128 18:45:02.065313 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" event={"ID":"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a","Type":"ContainerDied","Data":"7a9fb4d0db3575fbd8dfdcdb0930870fa6d9dfea80b58ef26d832f610f054e99"} Nov 28 18:45:02 crc kubenswrapper[5024]: I1128 18:45:02.065342 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" event={"ID":"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a","Type":"ContainerStarted","Data":"706d59dd134806b1236099ac845dd3785b46f87e553f9a84804c90c17edcf6ea"} Nov 28 18:45:02 crc kubenswrapper[5024]: I1128 18:45:02.541655 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" path="/var/lib/kubelet/pods/309a17c4-130c-4a0e-aa80-7c6254a0f2a4/volumes" Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.607429 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.688938 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-config-volume\") pod \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.689458 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7vkb\" (UniqueName: \"kubernetes.io/projected/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-kube-api-access-x7vkb\") pod \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.689583 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a" (UID: "f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.689610 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-secret-volume\") pod \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\" (UID: \"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a\") " Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.690134 5024 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.703352 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a" (UID: "f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.703501 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-kube-api-access-x7vkb" (OuterVolumeSpecName: "kube-api-access-x7vkb") pod "f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a" (UID: "f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a"). InnerVolumeSpecName "kube-api-access-x7vkb". 
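Volume teardown above always runs in two phases: an UnmountVolume.TearDown success from operation_generator.go, then a matching "Volume detached" record from reconciler_common.go. A sketch that pairs the two phases across a dump read on stdin; the regexps are heuristics fitted to this log's phrasing (the detach message sits inside a quoted klog string, hence the escaped quotes), not a stable interface:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Heuristic patterns for the two phases in this dump. The
        // detach message is inside a quoted klog string, so its volume
        // name appears as \"name\" in the raw journal text.
        tearDown := regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume .*?\(OuterVolumeSpecName: "([^"]+)"\)`)
        detached := regexp.MustCompile(`Volume detached for volume \\"([^\\"]+)\\"`)

        torn := map[string]int{}
        gone := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            for _, m := range tearDown.FindAllStringSubmatch(line, -1) {
                torn[m[1]]++
            }
            for _, m := range detached.FindAllStringSubmatch(line, -1) {
                gone[m[1]]++
            }
        }
        for v, n := range torn {
            if gone[v] < n {
                fmt.Printf("volume %q: %d teardown(s), %d detach record(s)\n", v, n, gone[v])
            }
        }
    }

Fed something like journalctl -u kubelet, it flags any volume whose unmount never produced a matching detach record.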
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.798846 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7vkb\" (UniqueName: \"kubernetes.io/projected/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-kube-api-access-x7vkb\") on node \"crc\" DevicePath \"\"" Nov 28 18:45:03 crc kubenswrapper[5024]: I1128 18:45:03.798881 5024 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:45:04 crc kubenswrapper[5024]: I1128 18:45:04.090721 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" event={"ID":"f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a","Type":"ContainerDied","Data":"706d59dd134806b1236099ac845dd3785b46f87e553f9a84804c90c17edcf6ea"} Nov 28 18:45:04 crc kubenswrapper[5024]: I1128 18:45:04.091007 5024 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="706d59dd134806b1236099ac845dd3785b46f87e553f9a84804c90c17edcf6ea" Nov 28 18:45:04 crc kubenswrapper[5024]: I1128 18:45:04.090752 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405925-dm45j" Nov 28 18:45:04 crc kubenswrapper[5024]: I1128 18:45:04.680668 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d"] Nov 28 18:45:04 crc kubenswrapper[5024]: I1128 18:45:04.691482 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-w8v7d"] Nov 28 18:45:06 crc kubenswrapper[5024]: I1128 18:45:06.512004 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20d89c0-adc6-4d04-976c-454c89ec777e" path="/var/lib/kubelet/pods/d20d89c0-adc6-4d04-976c-454c89ec777e/volumes" Nov 28 18:45:07 crc kubenswrapper[5024]: I1128 18:45:07.564959 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:45:07 crc kubenswrapper[5024]: I1128 18:45:07.565276 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:45:11 crc kubenswrapper[5024]: I1128 18:45:11.097614 5024 scope.go:117] "RemoveContainer" containerID="07cf9c4eff298992655f4349da12b63066f04153d4b3755438db208a41e0be6d" Nov 28 18:45:11 crc kubenswrapper[5024]: I1128 18:45:11.131304 5024 scope.go:117] "RemoveContainer" containerID="4100caf407d9a6904d6b4a273a778305405a8b19c7106d4b10f0895b18c742f2" Nov 28 18:45:11 crc kubenswrapper[5024]: I1128 18:45:11.191754 5024 scope.go:117] "RemoveContainer" containerID="04de4ce793ef9dae183496ccc1572dda3d4d67d4709d19b832b649d62a0669bd" Nov 28 18:45:37 crc kubenswrapper[5024]: I1128 18:45:37.565257 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:45:37 crc kubenswrapper[5024]: I1128 18:45:37.565772 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.565062 5024 patch_prober.go:28] interesting pod/machine-config-daemon-ps8mf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.566902 5024 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.567087 5024 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.568230 5024 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111"} pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.568374 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerName="machine-config-daemon" containerID="cri-o://a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" gracePeriod=600 Nov 28 18:46:07 crc kubenswrapper[5024]: E1128 18:46:07.690415 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.875424 5024 generic.go:334] "Generic (PLEG): container finished" podID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" exitCode=0 Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.875456 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" event={"ID":"77bf51a4-547d-4a7b-b841-59f4fbacbd97","Type":"ContainerDied","Data":"a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111"} Nov 28 18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.875496 5024 scope.go:117] "RemoveContainer" containerID="341f4ee9b8cbef62b36434f9d731f94ba6dabce1bdd9d060f4ec6256f9507c7c" Nov 28 
18:46:07 crc kubenswrapper[5024]: I1128 18:46:07.877207 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:46:07 crc kubenswrapper[5024]: E1128 18:46:07.877626 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:46:20 crc kubenswrapper[5024]: I1128 18:46:20.498775 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:46:20 crc kubenswrapper[5024]: E1128 18:46:20.499555 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:46:32 crc kubenswrapper[5024]: I1128 18:46:32.500812 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:46:32 crc kubenswrapper[5024]: E1128 18:46:32.501944 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:46:47 crc kubenswrapper[5024]: I1128 18:46:47.498377 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:46:47 crc kubenswrapper[5024]: E1128 18:46:47.499225 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:47:00 crc kubenswrapper[5024]: I1128 18:47:00.499242 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:47:00 crc kubenswrapper[5024]: E1128 18:47:00.500413 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:47:13 crc kubenswrapper[5024]: I1128 18:47:13.497837 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:47:13 crc 
kubenswrapper[5024]: E1128 18:47:13.499617 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:47:26 crc kubenswrapper[5024]: I1128 18:47:26.498604 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:47:26 crc kubenswrapper[5024]: E1128 18:47:26.499550 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:47:37 crc kubenswrapper[5024]: I1128 18:47:37.497695 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:47:37 crc kubenswrapper[5024]: E1128 18:47:37.498499 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:47:51 crc kubenswrapper[5024]: I1128 18:47:51.498379 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:47:51 crc kubenswrapper[5024]: E1128 18:47:51.499302 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:48:06 crc kubenswrapper[5024]: I1128 18:48:06.500285 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:48:06 crc kubenswrapper[5024]: E1128 18:48:06.501670 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:48:21 crc kubenswrapper[5024]: I1128 18:48:21.498699 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:48:21 crc kubenswrapper[5024]: E1128 18:48:21.499836 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:48:35 crc kubenswrapper[5024]: I1128 18:48:35.497618 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:48:35 crc kubenswrapper[5024]: E1128 18:48:35.498538 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:48:48 crc kubenswrapper[5024]: I1128 18:48:48.516077 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:48:48 crc kubenswrapper[5024]: E1128 18:48:48.517316 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:49:00 crc kubenswrapper[5024]: I1128 18:49:00.499281 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:49:00 crc kubenswrapper[5024]: E1128 18:49:00.500222 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:49:13 crc kubenswrapper[5024]: I1128 18:49:13.498263 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:49:13 crc kubenswrapper[5024]: E1128 18:49:13.499043 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:49:25 crc kubenswrapper[5024]: I1128 18:49:25.498401 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:49:25 crc kubenswrapper[5024]: E1128 18:49:25.499788 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:49:37 crc kubenswrapper[5024]: I1128 18:49:37.499218 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:49:37 crc kubenswrapper[5024]: E1128 18:49:37.499953 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:49:51 crc kubenswrapper[5024]: I1128 18:49:51.497936 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:49:51 crc kubenswrapper[5024]: E1128 18:49:51.498630 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:50:02 crc kubenswrapper[5024]: I1128 18:50:02.499432 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:50:02 crc kubenswrapper[5024]: E1128 18:50:02.509166 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.425822 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jtxjz"] Nov 28 18:50:03 crc kubenswrapper[5024]: E1128 18:50:03.426468 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerName="gather" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.426484 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerName="gather" Nov 28 18:50:03 crc kubenswrapper[5024]: E1128 18:50:03.426517 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a" containerName="collect-profiles" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.426525 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a" containerName="collect-profiles" Nov 28 18:50:03 crc kubenswrapper[5024]: E1128 18:50:03.426538 5024 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerName="copy" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.426544 5024 state_mem.go:107] "Deleted CPUSet assignment" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerName="copy" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.426818 5024 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f7da2e1c-c6a1-4be5-8ae6-4b98546aaf9a" containerName="collect-profiles" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.426835 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerName="copy" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.426852 5024 memory_manager.go:354] "RemoveStaleState removing state" podUID="309a17c4-130c-4a0e-aa80-7c6254a0f2a4" containerName="gather" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.428950 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.442633 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtxjz"] Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.500432 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-catalog-content\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.501479 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-utilities\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.501809 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjthc\" (UniqueName: \"kubernetes.io/projected/3964e05f-982a-4e70-b295-cba4735eadf9-kube-api-access-vjthc\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.603594 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjthc\" (UniqueName: \"kubernetes.io/projected/3964e05f-982a-4e70-b295-cba4735eadf9-kube-api-access-vjthc\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.603706 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-catalog-content\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.603739 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-utilities\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.604709 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-catalog-content\") pod \"redhat-operators-jtxjz\" (UID: 
\"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.604769 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-utilities\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.633189 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjthc\" (UniqueName: \"kubernetes.io/projected/3964e05f-982a-4e70-b295-cba4735eadf9-kube-api-access-vjthc\") pod \"redhat-operators-jtxjz\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:03 crc kubenswrapper[5024]: I1128 18:50:03.760691 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:04 crc kubenswrapper[5024]: I1128 18:50:04.309907 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtxjz"] Nov 28 18:50:04 crc kubenswrapper[5024]: I1128 18:50:04.919280 5024 generic.go:334] "Generic (PLEG): container finished" podID="3964e05f-982a-4e70-b295-cba4735eadf9" containerID="120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa" exitCode=0 Nov 28 18:50:04 crc kubenswrapper[5024]: I1128 18:50:04.919577 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtxjz" event={"ID":"3964e05f-982a-4e70-b295-cba4735eadf9","Type":"ContainerDied","Data":"120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa"} Nov 28 18:50:04 crc kubenswrapper[5024]: I1128 18:50:04.919614 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtxjz" event={"ID":"3964e05f-982a-4e70-b295-cba4735eadf9","Type":"ContainerStarted","Data":"e8536643967191057a63aa97739d6968871096314872ecd6ea7de9f039c1be64"} Nov 28 18:50:04 crc kubenswrapper[5024]: I1128 18:50:04.921443 5024 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 18:50:06 crc kubenswrapper[5024]: I1128 18:50:06.942101 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtxjz" event={"ID":"3964e05f-982a-4e70-b295-cba4735eadf9","Type":"ContainerStarted","Data":"83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401"} Nov 28 18:50:09 crc kubenswrapper[5024]: I1128 18:50:09.997219 5024 generic.go:334] "Generic (PLEG): container finished" podID="3964e05f-982a-4e70-b295-cba4735eadf9" containerID="83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401" exitCode=0 Nov 28 18:50:09 crc kubenswrapper[5024]: I1128 18:50:09.997332 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtxjz" event={"ID":"3964e05f-982a-4e70-b295-cba4735eadf9","Type":"ContainerDied","Data":"83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401"} Nov 28 18:50:11 crc kubenswrapper[5024]: I1128 18:50:11.015725 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtxjz" event={"ID":"3964e05f-982a-4e70-b295-cba4735eadf9","Type":"ContainerStarted","Data":"83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f"} Nov 28 18:50:11 crc kubenswrapper[5024]: I1128 
18:50:11.039704 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jtxjz" podStartSLOduration=2.224208357 podStartE2EDuration="8.039672423s" podCreationTimestamp="2025-11-28 18:50:03 +0000 UTC" firstStartedPulling="2025-11-28 18:50:04.921163835 +0000 UTC m=+6706.970084740" lastFinishedPulling="2025-11-28 18:50:10.736627901 +0000 UTC m=+6712.785548806" observedRunningTime="2025-11-28 18:50:11.03817796 +0000 UTC m=+6713.087098865" watchObservedRunningTime="2025-11-28 18:50:11.039672423 +0000 UTC m=+6713.088593368" Nov 28 18:50:13 crc kubenswrapper[5024]: I1128 18:50:13.761767 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:13 crc kubenswrapper[5024]: I1128 18:50:13.762467 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.499605 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:50:14 crc kubenswrapper[5024]: E1128 18:50:14.500838 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.551287 5024 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t26ws"] Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.555960 5024 util.go:30] "No sandbox for pod can be found. 
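By this point the machine-config-daemon has been in CrashLoopBackOff for several minutes, and every retry above is refused with the same "back-off 5m0s" message. The kubelet's restart delay doubles from an initial 10s up to a 5m cap, which is where the quoted figure comes from; the sketch below assumes those defaults rather than reading this cluster's configuration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: restart delay starts at 10s,
        // doubles per crash, and is capped at 5m, hence the repeated
        // "back-off 5m0s" in the entries above.
        d, limit := 10*time.Second, 5*time.Minute
        for {
            fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s
            if d >= limit {
                break
            }
            if d *= 2; d > limit {
                d = limit
            }
        }
    }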
Need to start a new one" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.596562 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t26ws"] Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.760780 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nwg\" (UniqueName: \"kubernetes.io/projected/d3109b76-d7b8-4143-99cf-1221b716bc87-kube-api-access-w4nwg\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.761001 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-utilities\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.761259 5024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-catalog-content\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.836523 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtxjz" podUID="3964e05f-982a-4e70-b295-cba4735eadf9" containerName="registry-server" probeResult="failure" output=< Nov 28 18:50:14 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:50:14 crc kubenswrapper[5024]: > Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.864240 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4nwg\" (UniqueName: \"kubernetes.io/projected/d3109b76-d7b8-4143-99cf-1221b716bc87-kube-api-access-w4nwg\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.864369 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-utilities\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.864420 5024 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-catalog-content\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.864943 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-catalog-content\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 
18:50:14.865370 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-utilities\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:14 crc kubenswrapper[5024]: I1128 18:50:14.897658 5024 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4nwg\" (UniqueName: \"kubernetes.io/projected/d3109b76-d7b8-4143-99cf-1221b716bc87-kube-api-access-w4nwg\") pod \"certified-operators-t26ws\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:15 crc kubenswrapper[5024]: I1128 18:50:15.191780 5024 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:15 crc kubenswrapper[5024]: I1128 18:50:15.788488 5024 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t26ws"] Nov 28 18:50:16 crc kubenswrapper[5024]: I1128 18:50:16.081567 5024 generic.go:334] "Generic (PLEG): container finished" podID="d3109b76-d7b8-4143-99cf-1221b716bc87" containerID="48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2" exitCode=0 Nov 28 18:50:16 crc kubenswrapper[5024]: I1128 18:50:16.081621 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t26ws" event={"ID":"d3109b76-d7b8-4143-99cf-1221b716bc87","Type":"ContainerDied","Data":"48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2"} Nov 28 18:50:16 crc kubenswrapper[5024]: I1128 18:50:16.081647 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t26ws" event={"ID":"d3109b76-d7b8-4143-99cf-1221b716bc87","Type":"ContainerStarted","Data":"7b4db876e0f92f788cb9f5cbf69dc723591c4c85efeabe9cf8a8e6388924b3cd"} Nov 28 18:50:18 crc kubenswrapper[5024]: I1128 18:50:18.112082 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t26ws" event={"ID":"d3109b76-d7b8-4143-99cf-1221b716bc87","Type":"ContainerStarted","Data":"85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63"} Nov 28 18:50:19 crc kubenswrapper[5024]: I1128 18:50:19.128360 5024 generic.go:334] "Generic (PLEG): container finished" podID="d3109b76-d7b8-4143-99cf-1221b716bc87" containerID="85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63" exitCode=0 Nov 28 18:50:19 crc kubenswrapper[5024]: I1128 18:50:19.128440 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t26ws" event={"ID":"d3109b76-d7b8-4143-99cf-1221b716bc87","Type":"ContainerDied","Data":"85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63"} Nov 28 18:50:20 crc kubenswrapper[5024]: I1128 18:50:20.141692 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t26ws" event={"ID":"d3109b76-d7b8-4143-99cf-1221b716bc87","Type":"ContainerStarted","Data":"a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121"} Nov 28 18:50:20 crc kubenswrapper[5024]: I1128 18:50:20.169660 5024 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t26ws" podStartSLOduration=2.544136231 podStartE2EDuration="6.169635888s" podCreationTimestamp="2025-11-28 18:50:14 +0000 UTC" 
firstStartedPulling="2025-11-28 18:50:16.085314381 +0000 UTC m=+6718.134235286" lastFinishedPulling="2025-11-28 18:50:19.710814038 +0000 UTC m=+6721.759734943" observedRunningTime="2025-11-28 18:50:20.159886309 +0000 UTC m=+6722.208807214" watchObservedRunningTime="2025-11-28 18:50:20.169635888 +0000 UTC m=+6722.218556793" Nov 28 18:50:23 crc kubenswrapper[5024]: E1128 18:50:23.026154 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3109b76_d7b8_4143_99cf_1221b716bc87.slice/crio-conmon-85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63.scope\": RecentStats: unable to find data in memory cache]" Nov 28 18:50:23 crc kubenswrapper[5024]: E1128 18:50:23.913691 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3109b76_d7b8_4143_99cf_1221b716bc87.slice/crio-conmon-85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63.scope\": RecentStats: unable to find data in memory cache]" Nov 28 18:50:24 crc kubenswrapper[5024]: I1128 18:50:24.814300 5024 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtxjz" podUID="3964e05f-982a-4e70-b295-cba4735eadf9" containerName="registry-server" probeResult="failure" output=< Nov 28 18:50:24 crc kubenswrapper[5024]: timeout: failed to connect service ":50051" within 1s Nov 28 18:50:24 crc kubenswrapper[5024]: > Nov 28 18:50:25 crc kubenswrapper[5024]: I1128 18:50:25.192091 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:25 crc kubenswrapper[5024]: I1128 18:50:25.192293 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:25 crc kubenswrapper[5024]: I1128 18:50:25.250517 5024 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:25 crc kubenswrapper[5024]: I1128 18:50:25.498417 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:50:25 crc kubenswrapper[5024]: E1128 18:50:25.499071 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:50:26 crc kubenswrapper[5024]: I1128 18:50:26.250777 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:26 crc kubenswrapper[5024]: I1128 18:50:26.301614 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t26ws"] Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.225383 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t26ws" podUID="d3109b76-d7b8-4143-99cf-1221b716bc87" containerName="registry-server" containerID="cri-o://a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121" 
gracePeriod=2 Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.853551 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.972798 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-catalog-content\") pod \"d3109b76-d7b8-4143-99cf-1221b716bc87\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.972892 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4nwg\" (UniqueName: \"kubernetes.io/projected/d3109b76-d7b8-4143-99cf-1221b716bc87-kube-api-access-w4nwg\") pod \"d3109b76-d7b8-4143-99cf-1221b716bc87\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.973093 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-utilities\") pod \"d3109b76-d7b8-4143-99cf-1221b716bc87\" (UID: \"d3109b76-d7b8-4143-99cf-1221b716bc87\") " Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.973682 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-utilities" (OuterVolumeSpecName: "utilities") pod "d3109b76-d7b8-4143-99cf-1221b716bc87" (UID: "d3109b76-d7b8-4143-99cf-1221b716bc87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.974719 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:50:28 crc kubenswrapper[5024]: I1128 18:50:28.978887 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3109b76-d7b8-4143-99cf-1221b716bc87-kube-api-access-w4nwg" (OuterVolumeSpecName: "kube-api-access-w4nwg") pod "d3109b76-d7b8-4143-99cf-1221b716bc87" (UID: "d3109b76-d7b8-4143-99cf-1221b716bc87"). InnerVolumeSpecName "kube-api-access-w4nwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.025381 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3109b76-d7b8-4143-99cf-1221b716bc87" (UID: "d3109b76-d7b8-4143-99cf-1221b716bc87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.078248 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3109b76-d7b8-4143-99cf-1221b716bc87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.078651 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4nwg\" (UniqueName: \"kubernetes.io/projected/d3109b76-d7b8-4143-99cf-1221b716bc87-kube-api-access-w4nwg\") on node \"crc\" DevicePath \"\"" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.239334 5024 generic.go:334] "Generic (PLEG): container finished" podID="d3109b76-d7b8-4143-99cf-1221b716bc87" containerID="a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121" exitCode=0 Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.239384 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t26ws" event={"ID":"d3109b76-d7b8-4143-99cf-1221b716bc87","Type":"ContainerDied","Data":"a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121"} Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.239397 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t26ws" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.239414 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t26ws" event={"ID":"d3109b76-d7b8-4143-99cf-1221b716bc87","Type":"ContainerDied","Data":"7b4db876e0f92f788cb9f5cbf69dc723591c4c85efeabe9cf8a8e6388924b3cd"} Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.239439 5024 scope.go:117] "RemoveContainer" containerID="a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.279299 5024 scope.go:117] "RemoveContainer" containerID="85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.291818 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t26ws"] Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.306423 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t26ws"] Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.321552 5024 scope.go:117] "RemoveContainer" containerID="48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.355673 5024 scope.go:117] "RemoveContainer" containerID="a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121" Nov 28 18:50:29 crc kubenswrapper[5024]: E1128 18:50:29.356585 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121\": container with ID starting with a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121 not found: ID does not exist" containerID="a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.356644 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121"} err="failed to get container status 
\"a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121\": rpc error: code = NotFound desc = could not find container \"a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121\": container with ID starting with a5441f5c950889e9949f9fbf6af9b49e180c833c1fd55d43dcefd72c4445b121 not found: ID does not exist" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.356684 5024 scope.go:117] "RemoveContainer" containerID="85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63" Nov 28 18:50:29 crc kubenswrapper[5024]: E1128 18:50:29.357043 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63\": container with ID starting with 85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63 not found: ID does not exist" containerID="85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.357066 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63"} err="failed to get container status \"85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63\": rpc error: code = NotFound desc = could not find container \"85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63\": container with ID starting with 85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63 not found: ID does not exist" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.357079 5024 scope.go:117] "RemoveContainer" containerID="48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2" Nov 28 18:50:29 crc kubenswrapper[5024]: E1128 18:50:29.357336 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2\": container with ID starting with 48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2 not found: ID does not exist" containerID="48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2" Nov 28 18:50:29 crc kubenswrapper[5024]: I1128 18:50:29.357375 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2"} err="failed to get container status \"48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2\": rpc error: code = NotFound desc = could not find container \"48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2\": container with ID starting with 48a4b8b6dab4956dc8affbb1f211967d888370cf7e0b3b41b1c12ce3e757b0f2 not found: ID does not exist" Nov 28 18:50:30 crc kubenswrapper[5024]: I1128 18:50:30.517630 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3109b76-d7b8-4143-99cf-1221b716bc87" path="/var/lib/kubelet/pods/d3109b76-d7b8-4143-99cf-1221b716bc87/volumes" Nov 28 18:50:33 crc kubenswrapper[5024]: E1128 18:50:33.390907 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3109b76_d7b8_4143_99cf_1221b716bc87.slice/crio-conmon-85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63.scope\": RecentStats: unable to find data in memory cache]" Nov 28 18:50:33 crc kubenswrapper[5024]: I1128 18:50:33.817081 5024 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:33 crc kubenswrapper[5024]: I1128 18:50:33.899592 5024 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:34 crc kubenswrapper[5024]: I1128 18:50:34.612708 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtxjz"] Nov 28 18:50:35 crc kubenswrapper[5024]: I1128 18:50:35.320227 5024 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jtxjz" podUID="3964e05f-982a-4e70-b295-cba4735eadf9" containerName="registry-server" containerID="cri-o://83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f" gracePeriod=2 Nov 28 18:50:35 crc kubenswrapper[5024]: I1128 18:50:35.900278 5024 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.067249 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-catalog-content\") pod \"3964e05f-982a-4e70-b295-cba4735eadf9\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.067455 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjthc\" (UniqueName: \"kubernetes.io/projected/3964e05f-982a-4e70-b295-cba4735eadf9-kube-api-access-vjthc\") pod \"3964e05f-982a-4e70-b295-cba4735eadf9\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.067535 5024 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-utilities\") pod \"3964e05f-982a-4e70-b295-cba4735eadf9\" (UID: \"3964e05f-982a-4e70-b295-cba4735eadf9\") " Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.068144 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-utilities" (OuterVolumeSpecName: "utilities") pod "3964e05f-982a-4e70-b295-cba4735eadf9" (UID: "3964e05f-982a-4e70-b295-cba4735eadf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.072728 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3964e05f-982a-4e70-b295-cba4735eadf9-kube-api-access-vjthc" (OuterVolumeSpecName: "kube-api-access-vjthc") pod "3964e05f-982a-4e70-b295-cba4735eadf9" (UID: "3964e05f-982a-4e70-b295-cba4735eadf9"). InnerVolumeSpecName "kube-api-access-vjthc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.170820 5024 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjthc\" (UniqueName: \"kubernetes.io/projected/3964e05f-982a-4e70-b295-cba4735eadf9-kube-api-access-vjthc\") on node \"crc\" DevicePath \"\"" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.170861 5024 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.172552 5024 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3964e05f-982a-4e70-b295-cba4735eadf9" (UID: "3964e05f-982a-4e70-b295-cba4735eadf9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.273429 5024 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3964e05f-982a-4e70-b295-cba4735eadf9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.332164 5024 generic.go:334] "Generic (PLEG): container finished" podID="3964e05f-982a-4e70-b295-cba4735eadf9" containerID="83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f" exitCode=0 Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.332222 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtxjz" event={"ID":"3964e05f-982a-4e70-b295-cba4735eadf9","Type":"ContainerDied","Data":"83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f"} Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.332296 5024 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtxjz" event={"ID":"3964e05f-982a-4e70-b295-cba4735eadf9","Type":"ContainerDied","Data":"e8536643967191057a63aa97739d6968871096314872ecd6ea7de9f039c1be64"} Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.332321 5024 scope.go:117] "RemoveContainer" containerID="83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.332241 5024 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtxjz" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.366095 5024 scope.go:117] "RemoveContainer" containerID="83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.382605 5024 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtxjz"] Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.394717 5024 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jtxjz"] Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.395759 5024 scope.go:117] "RemoveContainer" containerID="120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.495213 5024 scope.go:117] "RemoveContainer" containerID="83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f" Nov 28 18:50:36 crc kubenswrapper[5024]: E1128 18:50:36.495786 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f\": container with ID starting with 83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f not found: ID does not exist" containerID="83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.495838 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f"} err="failed to get container status \"83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f\": rpc error: code = NotFound desc = could not find container \"83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f\": container with ID starting with 83cc9d04b463b60a06dfa76d56b12f84144ba843b3f6606665dc2baca9998c0f not found: ID does not exist" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.495865 5024 scope.go:117] "RemoveContainer" containerID="83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401" Nov 28 18:50:36 crc kubenswrapper[5024]: E1128 18:50:36.496241 5024 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401\": container with ID starting with 83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401 not found: ID does not exist" containerID="83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.496290 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401"} err="failed to get container status \"83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401\": rpc error: code = NotFound desc = could not find container \"83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401\": container with ID starting with 83702d5671dcac9fbdd1a233b55347c73291b03b7e0773e041ac17660e08d401 not found: ID does not exist" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.496320 5024 scope.go:117] "RemoveContainer" containerID="120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa" Nov 28 18:50:36 crc kubenswrapper[5024]: E1128 18:50:36.496696 5024 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa\": container with ID starting with 120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa not found: ID does not exist" containerID="120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.496724 5024 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa"} err="failed to get container status \"120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa\": rpc error: code = NotFound desc = could not find container \"120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa\": container with ID starting with 120b81b824d489f155f0060a5797cf3f3be5f5ba702eff059f7aed642eaf8eaa not found: ID does not exist" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.498538 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:50:36 crc kubenswrapper[5024]: E1128 18:50:36.499108 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:50:36 crc kubenswrapper[5024]: I1128 18:50:36.515152 5024 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3964e05f-982a-4e70-b295-cba4735eadf9" path="/var/lib/kubelet/pods/3964e05f-982a-4e70-b295-cba4735eadf9/volumes" Nov 28 18:50:39 crc kubenswrapper[5024]: E1128 18:50:39.199065 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3109b76_d7b8_4143_99cf_1221b716bc87.slice/crio-conmon-85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63.scope\": RecentStats: unable to find data in memory cache]" Nov 28 18:50:43 crc kubenswrapper[5024]: E1128 18:50:43.446060 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3109b76_d7b8_4143_99cf_1221b716bc87.slice/crio-conmon-85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63.scope\": RecentStats: unable to find data in memory cache]" Nov 28 18:50:47 crc kubenswrapper[5024]: I1128 18:50:47.499058 5024 scope.go:117] "RemoveContainer" containerID="a2c17a11bbac9a6fc4bd0394ec1d0bee7d8f83d3c583081008d23a419e134111" Nov 28 18:50:47 crc kubenswrapper[5024]: E1128 18:50:47.500191 5024 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ps8mf_openshift-machine-config-operator(77bf51a4-547d-4a7b-b841-59f4fbacbd97)\"" pod="openshift-machine-config-operator/machine-config-daemon-ps8mf" podUID="77bf51a4-547d-4a7b-b841-59f4fbacbd97" Nov 28 18:50:48 crc kubenswrapper[5024]: E1128 18:50:48.286390 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3109b76_d7b8_4143_99cf_1221b716bc87.slice/crio-conmon-85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63.scope\": RecentStats: unable to find data in memory cache]" Nov 28 18:50:48 crc kubenswrapper[5024]: E1128 18:50:48.286441 5024 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3109b76_d7b8_4143_99cf_1221b716bc87.slice/crio-conmon-85a4073db269b1614d9d3e28b9db25e72c517dfde3ed40926c392dd58813aa63.scope\": RecentStats: unable to find data in memory cache]"